Friday, March 25, 2011

Why Assessment?

When the word came filtering down through the academic rumor enhancement facility (an endowed building, thankfully) that we'd have to do assessment--this happening in a past demarcated by one (1) millennium post, one (1) century post, and two (2) decades--we academics ran for cover. Some are still down there in their foxholes, waiting for the war to be over, not having seen the headlines proclaiming victory for the other side.

I have to wonder why the emphasis was ever put on assessment in the first place. It's certainly a bad word to build a PR campaign around, like trying to advertise lamprey on the menu. Anyway, assessment isn't the point at all. This has only recently occurred to me, which may be symptomatic of all those sedimentation layers from said rumor enhancement facility that piled 'assessment' on top of 'measurement' and so on and on. You could make chalk out of the stuff if you had the patience.

When I finally made the synaptic leap, it was a shock. But perhaps I can be excused, given the continual emphasis on assessment in conferences, publications, and public discourse.

Assessment is one component of a theoretical institutional effectiveness process, but it's not the most important one. The spiral of excellence is to be climbed by:
  1. Setting goals
  2. Finding assessments for same
  3. Gathering data and analyzing it
  4. Making changes that may improve things
The first step and the last are the most important, and if you could only pick one, number four would be it. If the mandate were simply to set goals and try to achieve them, anyone taking that seriously would pretty quickly figure out that some sort of assessment was in order. So why do we put assessment first? Historical accident?

Compounding the premier position of assessment is the way we sometimes talk about the activity, as if we were doing science in order to do engineering. What I mean is that I characterize science as pinning down cause and effect, and engineering as using those principles to accomplish some aim. The two get mixed up in learning outcomes assessment, because it's often expected to do both at the same time. Here again, the heavy emphasis on assessment gets in the way. To find a link between cause and effect, we have to vary starting conditions and compare ending conditions. Then we have to hope that the universe is reasonable and lets us get away with inductively assuming that because something worked last time it will work again the next time.

Suppose we want to test some variations of fertilizers on corn to see which one works best. We'd try to keep everything constant except our treatment (type of fertilizer), which varies in some suitable way. We'd be careful to randomize over plots of ground to average out soil or topographical variance, and so on. We would carefully document all the conditions (water, acidity, parasites, etc.) during the experiment, and finally assess the yield at the end. Now all that stuff I just reeled off is part of the experiment, but with learning outcomes we typically wish everything away except for the last bit--assessing the yield. I have yet to see an effectiveness model that has a box asking for the experimental conditions during the teaching process. Maybe that's because few would do it.
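
To make the contrast concrete, here is a minimal sketch of the corn experiment as a little simulation (Python, with the plot counts, treatment names, conditions, and yield numbers invented purely for illustration). The point is that the random assignment, the recorded conditions, and the yield measurement are all parts of one design, and the "assessment" is only the last few lines of it.

    import random
    import statistics

    random.seed(1)  # reproducible illustration

    # Hypothetical setup: twelve plots, three treatments, assigned at random
    # so that soil and topography differences average out across treatments.
    plots = list(range(12))
    treatments = ["fertilizer_A", "fertilizer_B", "control"]
    random.shuffle(plots)
    assignment = {plot: treatments[i % 3] for i, plot in enumerate(plots)}

    # Conditions during the experiment are part of the record (stubbed here;
    # in a real trial they would be measured per plot over the season).
    conditions = {plot: {"water_inches": 25.0, "soil_ph": 6.5} for plot in plots}

    # Invented yields: each treatment gets a made-up base plus plot-to-plot noise.
    def measure_yield(plot):
        base = {"fertilizer_A": 160, "fertilizer_B": 150, "control": 140}
        return base[assignment[plot]] + random.gauss(0, 10)

    yields = {plot: measure_yield(plot) for plot in plots}

    # Only now does the assessment happen: compare average yield by treatment.
    for t in treatments:
        group = [yields[p] for p in plots if assignment[p] == t]
        print(f"{t}: mean yield {statistics.mean(group):.1f} (n={len(group)})")

An effectiveness model that asked for the analogue of the middle section--what actually went on during the teaching--would be doing something like this; most only ask for the last loop.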

The effect of this casual approach to experimentation is to make the assessment less valid. This can be remedied to some extent by bringing the context back into the light, which is exactly why having teaching faculty involved in broad discussions that include, but are not limited to, assessment results is so important. Given the typical (and understandable) lack of rigor in most learning assessments, the results are more like a psychic reading than science. And that's fine. It should be presented that way, so that faculty members don't get so stressed out about a "scientific" use of the results. Focusing more heavily on goals and improvements makes it easier to engage faculty, and it puts the emphasis where it ought to be: taking action.

2 comments:

  1. David,

    You seem to be conflating two uses of the term "assessment". This is not an uncommon problem in the area of assessment. The four-step process you described is a typical assessment process--program assessment. In assessing a program, it is necessary to use evidence collected from students, often gathered with an assessment instrument like an exam--student assessment.

    It was a poor choice of wording on the part of the education field to use the word "assessment" in two ways. It has created much confusion for those still in the foxholes, as you put it. Maybe a new word can be coined for programmatic assessment that does not carry the same confusing and negative connotations. Here's to hoping...

    David

  2. Hi David--great to hear from you & thanks for the comment. The word 'assessment' does get put to many uses, and it seems to have become shorthand for all things related to goal-oriented activities. I would say that the four steps above are expected of individual outcomes as well as program assessments, but that may not be what you mean by the second interpretation.

    It's probably too late now to fix it--there are shelves full of books about 'assessment,' and to be sure that narrow focus is useful, just not the main point. Maybe something like "the professionalization of teaching" would sell better. That's essentially what it is--trying to turn the anarchy of 'everyone for himself' teaching into something that is not standardized, but professionalized.
