Tuesday, April 27, 2010

Opening Doors

"Opening Doors to Faculty Involvement in Assessment" is the title of a new paper by Pat Hutchings, published by the National Institute for Learning Outcomes Assessment. Here's the thesis:
The assessment literature is replete with admonitions about the importance of faculty involvement, a kind of gold standard widely understood to be the key to assessment’s impact “on the ground,” in classrooms where teachers and students meet. Unfortunately, much of what has been done in the name of assessment has failed to engage large numbers of faculty in significant ways.
She ultimately suggests some remedies:
  1. Build assessment around the regular, ongoing work of teaching and learning;
  2. Make a place for assessment in faculty development;
  3. Integrate assessment into the preparation of graduate students;
  4. Reframe assessment as scholarship;
  5. Create campus spaces and occasions for constructive assessment conversation and action; and
  6. Involve students in assessment.
There's a lot to dissect here. Let's start with the big picture: what is faculty-driven assessment supposed to achieve? If the answer is better classroom instruction, that's one thing. That's easy. If the answer is to ensure that graduates are prepared for the work force in the name of accountability, that's another issue entirely. In this paper, the ultimate outcome isn't clear to me. I've asked this question at conferences--including once of Peter Ewell (who wrote a foreword to this piece)--asking for the government to give us summative information about employment histories of graduates by institution (from the IRS, I presume) so that we could actually see what happens to them, at least in terms of earning power. I wrote an article for U. Business with the same plea. I've asked at the state level. Forget about standardized tests--this would give us actual information, not proxies, for something close to accountability. We could look at total cost compared to financial outcomes and employment chances. This matters because it could drive big curriculum changes in a way that classroom-centered assessment cannot. We can debate whether such a mercantile view of education has merit, but the results could be surprising, as this Wall Street Journal report, and my analysis of it, show.

The paper does not give us a hard goal as the outcome of assessment, which is ironically typical of discussions about assessment.  This confusion between micro and macro is one of the obstacles to getting anything done.  In his foreword, Peter Ewell writes:
Now we have creative and authentic standardized general skills tests like the Collegiate Learning Assessment (CLA) and the Critical-Thinking Assessment Test (CAT), as well as a range of solid techniques like curriculum mapping, rubric-based grading, and electronic portfolios. These technical developments have yielded valid mechanisms for gathering evidence of student performance that look a lot more like how faculty do this than ScanTron forms and bubble sheets.
The assessment techniques here range from general standardized tests suitable (maybe) only for comparing institutions to archival techniques for individual student work--a whole spectrum of approaches, each more or less suitable depending on the ultimate outcome.  The descriptors "authentic" and "valid" are contingent on the intended use. The CLA isn't useful for determining whether a math major has learned any math, or a dance major has learned any dance.  Is it useful for predicting employment?  Who knows? Although it doesn't look like a bubble sheet, it correlates very highly with the SAT, so the effect is arguably the same.  Mechanisms, of course, can't be valid (only propositions can), and I think the excerpt is more than anything a statement about what is fashionable in the view of top-down assessment management.
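
To make the "arguably the same" point concrete, here is a minimal sketch in Python with made-up institution-level numbers (not CLA's actual data; every name and parameter below is invented for illustration). The idea: if school-level CLA averages track SAT averages closely, most of the between-school variance in CLA is already captured by SAT, and whatever is left over is a thin basis for comparing institutions.

  # Minimal sketch with hypothetical numbers: how much does a test like the CLA
  # add beyond the SAT if the two correlate very highly at the institution level?
  import numpy as np

  rng = np.random.default_rng(0)
  n_schools = 50

  # Made-up institution-level averages (not real CLA or SAT data).
  sat_mean = rng.normal(1050, 100, n_schools)   # hypothetical mean SAT per school
  extra = rng.normal(0, 30, n_schools)          # whatever the second test adds beyond SAT
  cla_mean = 400 + 0.7 * sat_mean + extra       # hypothetical mean CLA per school

  r = np.corrcoef(sat_mean, cla_mean)[0, 1]
  print(f"correlation r = {r:.2f}")                        # about 0.9 with these parameters
  print(f"variance already explained by SAT: {r**2:.0%}")  # about 85%

  # With a correlation this high, most of the between-school differences on the
  # second test are SAT differences in disguise; "value added" comparisons rest
  # on the small residual, which here is pure noise by construction.

None of this says anything about what any individual student learned; it only illustrates why a very high correlation makes the second measure largely redundant at the institution level.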

The author talks explicitly about the management of assessment activities (pg 9):
“If one endorsed the idea that, say, a truly successful liberal arts education is transformative or inspires wonder, the language of inputs and outputs and ‘value added’ leaves one cold” (Struck, 2007, p. 2). In short, it is striking how quickly assessment can come to be seen as part of “the management culture” (Walvoord, 2004, p. 7) rather than as a process at the heart of faculty’s work and interactions with students.
I think this is accurate, and one has to admit that assessment is largely driven by management, starting with accreditors in many cases. The best classroom-assessment cultures are probably built bottom-up from the faculty, but the impetus for assessment is top-down. This is not helped by the language and culture of the community of assessment professionals, which is heavily influenced by the testing = measurement = reality philosophy at home in Educational Psychology programs and standardized testing companies.

A discipline expert is naturally dubious of the claims that learning can be weighed up like a sack of potatoes, and that the neural states of a hundred billion brain cells can be summarized in a seven-bit statistic accurate enough, under some implicit model, to predict future behavior in some important respect.  Aren't critical thinkers supposed to be skeptical of claims like that?

Management can be hypocritical too.  The standard line is that grades aren't assessments, implying that grades are independent of learning.  If this is true, the whole schema for assigning and recording grades is a colossal fraud that the management (from the feds down) ought to be rooting out and replacing with assessments they believe in.  How many institutions other than WGU don't give grades?  And why do test makers use GPAs in making arguments for validity? (Edit:  CLA, for example.)

On the other hand, it makes perfect sense to a faculty member to focus on what happens in the classroom.  Good teachers, chairs, and program coordinators already make improvements based on what they see.  It makes sense to institutionalize this by rewarding the activity, advertising techniques that seem to work, and focusing attention on learning.  "Opening Doors" primarily focuses on how this might be done, and is recommended reading for assessment directors who work with faculty in a facilitating role.  One of the problems is that PhDs are given degrees for knowledge of a discipline, not teaching effectiveness.  Once employed, they find out about routine administration, and the questions the author identifies--"what purposes and goals are most important, whether those goals are met, and how to do better"--are left out in hyperspace:
Ironically, however, they have not been questions that naturally arise in the daily work of the professoriate or, say, in department meetings, which are more likely to deal with parking and schedules than with student learning.
Have you ever thought about how much bureaucracy there is in a university?  All the apportionment of time and resources, the forms and signatures, accounting and correspondence, distractions and procedures?  And what is the management approach to inducing a culture of assessment?  More bureaucracy.  Like giant databases that are supposed to give measurements of layered dimensions of learning outcomes.  From page 12 we have:
Some campuses are now employing online data management systems, like E-Lumen and TracDat, that invite faculty input into and access to assessment data (Hutchings, 2009). With developments like these facilitating faculty interest and engagement in ways impossible (or impossibly time consuming or technical) in assessment’s early days, new opportunities are on the rise.
Anyone familiar with this sort of system knows it doesn't facilitate faculty interest and engagement; it sends most of them running to higheredjobs.com. Turning teachers into bureaucrats isn't the answer.

On the other side, the author talks about anecdotal evidence for (pg 7)
[...] assessment’s power to prompt collective faculty conversation about purposes, often for the first time; about discovering the need to be more explicit about goals for student learning; about finding better ways to know whether those goals are being met; and about shaping and sharing feedback that can strengthen student learning.
A couple of paragraphs earlier she quotes one faculty member as saying "assessment is asking whether my students are learning what I am teaching."  This makes sense.  I gave a lecture once on the fundamental theorem of calculus that I thought was simply brilliant in clarity and exposition.  My bubble was burst almost immediately--I realized from their reactions and Q&A that the students hadn't understood it.  It was all a bunch of gobbledygook symbols on the board to them.  It makes sense to try to fix that.  Please just don't try to do it like No Child Left Behind's ubiquitous bureaucracy or other top-down arrogance.  At the top, please just clearly articulate a goal that you can provide real, unequivocal evidence for (employment statistics, salaries, graduation rates, family size, language fluency, NOT some vague learning outcome--that's only a means to an end, if it can even be said to exist).  Even then, remember that higher ed might be compared to the final process in an assembly line where the finishing touches are put on a car.  Outcomes like employment typically start right after graduation, but that doesn't mean that higher ed can be held solely responsible for the result.  If you want to raise the intellectual capacity of the country, figure out how to turn off all the vacuous "flickering lights" entertainment that inundates young minds--I bet that would have a massive effect on literacy rates.  But I'm probably biased because I grew up without a TV.  At any rate, this is not a higher ed problem, this is a societal problem--one that Ibn Khaldun (أبو زيد عبد الرحمن بن محمد بن خلدون الحضرمي) wrote about in The Muqaddimah in the 14th century: dynasties fail because their very success leads inevitably to decline (my paraphrasing).

The author sorts through some of the reasons for slow adoption of assessment.  One is that the "work of assessment is an uneasy match with institutional reward systems."  I think this is on the money.  If you look at your institution's way of evaluating teaching, chances are it relies heavily on a "customer-service" survey in the form of a standardized teaching evaluation done by students.  A formal version of ratemyprofessor.com.  This might seem fair, since everyone gets the same survey, but it's not--it's just easy.  The author later mentions the Peer Review of Teaching Project, which was new to me.  This looks like a rich and healthy approach to teaching evaluation that would naturally involve assessment activities.  I'm looking for something like this to start a conversation at my university.  Here are some key questions from the project's website:
  • How can I show the intellectual work of teaching that takes place inside and outside of my classroom?
  • How can I systematically investigate, analyze, and document my students’ learning?
  • How can I communicate this intellectual work to campus or disciplinary conversations?
In my "view from the battlefield," the author makes a mistake in endorsing standardized testing as an answer--a lurch back into the bureaucratic viewpoint (pg 12):
The Collegiate Learning Assessment (CLA), for instance, forgoes reductive multiple-choice formats in favor of authentic tasks that would be at home in the best classrooms; CLA leaders now offer workshops to help faculty design similar tasks for their own classrooms, the idea being that these activities are precisely what students need to build and improve their critical thinking and problem-solving skills.
Here's an example of one such CLA prompt on a "make-an-argument" item, taken from one of their advertisements:
Government money would be better spent on preventing crime than in dealing with criminals after the fact.
Feel free to gasp with horror, but this sort of thing would not be at home in any of the classes I've ever taught in math or computer science.  Am I supposed to stop teaching computer architecture for a day and hold a discussion about rhetoric?  The prompt is obviously too general to have a correct answer, so I suppose the point is to see whether or not the respondent can argue well.  That's all wonderful, but it's not what the student studies math for.  Let me put it another way: would you rather fly on a plane that was designed by an engineer who knew a lot about engineering or one that got top marks on the prompt above?  To propose to compare institutions or give a measurement of "value-added" based on this stuff is ludicrous in the space where discipline-based instruction happens.  Maybe it makes sense in political science or a rhetoric class.

On the other hand, this sort of thing would look really great to politicians who deal with questions like this every day.  To them, this may be authentic.  To discipline experts, probably not.  But I have a solution: shouldn't they be learning that stuff in high school or even earlier?  I realize it's the antithesis of No Child Left Behind-style thinking, but maybe it's worth considering...

The second half of the quote above is frightening, implying that it's a good thing that the test maker can become a consultant on how to improve scores on the test.  Let's take that as a critical thinking exercise of the sort that CLA espouses.  Here are my hypotheses (I wrote about this first here):
  1. CLA is taken seriously as a way to assess value-added learning and compare institutions (this is what they advertise); and
  2. CLA consultants can effectively increase an institution's scores on the test (this isn't hard to believe, since they know how the thing is scored).
Conclusion: there is economic benefit to using CLA + consultants in that it makes your institution look better relative to those that don't.  This conclusion is independent of any assumptions about learning.  It creates a system where the testing company controls the inputs and the outputs, much like SAT and SAT prep.  It's a successful business model: create a problem and sell the solution.  Unless you're really, really sure that your conclusions about the test results are useful, it's not smart to be on the receiving end of this--unless, that is, you can afford to blow the money and call it advertising.

Despite the odd misstep into virulent bureaucracy and too much enthusiasm for top-down assessments not tied to objective top-level goals, the article gives excellent advice for building the culture of assessment our accreditors are always going on about.  The recommendations are practical and useful, and can address the assessment problem at the troop level.  The big issue of getting the top-down approach fixed is not the topic of the article, and given where we are, is probably too much to hope for.

2 comments:

  1. Anonymous, 9:59 AM

    Learning happens at the individual level, not institutional. The greatest variability is within an institution, not between. There is no valid use of institution level data. It does not mean anything.

  2. Although I want to agree with you, we can imagine counterexamples. If the US decided that having more engineers was in its strategic interest, the government could find ways to increase the number of grads in those disciplines like it did during the space race. The institutional level outcome of what percentage of grads are engineering majors would be real and meaningful to judge the success of the programs. What seems to be lacking now is any real definition of what strategic goals the US wants out of higher education, defined in a clear and obviously measurable way. Maybe this is the fault of the higher ed advocacy leadership, who want to own that for the academy, but it seems to me that the gov't DOES have strategic interests at stake and ought to articulate them.
