I have to wonder why the emphasis was ever put on assessment in the first place. It's certainly a bad word to build a PR campaign around, like trying to advertise lamprey on the menu. Anyway, assessment isn't the point at all. This has only recently occurred to me, which may be symptomatic of all those sedimentation layers from said rumor enhancement facility that piled 'assessment' on top of 'measurement' and so on and on. You could make chalk out of the stuff if you had the patience.
When I finally made the synaptic leap, it was a shock. But perhaps I can be excused that, given the sustained emphasis on assessment in conferences, publications, and public discourse.
Assessment is one component of a theoretical institutional effectiveness process, but it's not the most important one. The spiral of excellence is to be climbed by:
- Setting goals
- Finding assessments for same
- Gathering data and analyzing it
- Making changes that may improve things
Compounding the premier position of assessment is the way we sometimes talk about the activity, as if we were doing science in order to do engineering. What I mean here is that I characterize science as pinning down cause and effect, whereas engineering is the use of those principles to accomplish some aim. This is all mixed up in learning outcomes assessment, because it's often expected to do both at the same time. Here again, the heavy emphasis on assessment gets in the way. To find a link between cause and effect, we have to vary starting conditions, and compare ending conditions. Then we have to hope that the universe is reasonable and lets us get away with inductively assuming that because it worked last time it will work again the next time.
Suppose we want to test some variations of fertilizers on corn to see which one works best. We'd try to keep everything constant except our treatment (type of fertilizer), which varies in some suitable way. We'd be careful to randomize over plots of ground to average out soil or topographical variance, and so on. We would carefully document all the conditions (water, acidity, parasites, etc.) during the experiment, and finally assess the yield at the end. Now all that stuff I just reeled off is part of the experiment, but with learning outcomes we typically wish everything away except for the last bit--assessing the yield. I have yet to see an effectiveness model that has a box asking for the experimental conditions during the teaching process. Maybe that's because few would do it.
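The randomized design sketched above can be made concrete in a few lines of code. Everything here is invented for illustration: the plot labels, the treatment names, and the yield numbers are placeholders, not real data.

```python
import random
import statistics

def assign_treatments(plots, treatments, seed=0):
    """Randomly assign a fertilizer treatment to each plot, so that soil and
    topographical variance average out across the treatment groups."""
    rng = random.Random(seed)
    shuffled = plots[:]
    rng.shuffle(shuffled)
    # Deal the shuffled plots round-robin into (near-)equal treatment groups.
    return {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}

def compare_yields(yields_by_plot, assignment):
    """Average the recorded yield within each treatment group."""
    return {t: statistics.mean(yields_by_plot[p] for p in group)
            for t, group in assignment.items()}

plots = [f"plot-{i}" for i in range(8)]
assignment = assign_treatments(plots, ["fertilizer-A", "fertilizer-B"])

# Invented yields, as if measured at harvest. A real experiment would also
# log the conditions (water, acidity, parasites, etc.) alongside the yield.
yields_by_plot = {p: 100 + i for i, p in enumerate(plots)}
results = compare_yields(yields_by_plot, assignment)
```

The point of the randomization step is exactly the one in the paragraph above: the comparison at the end is only meaningful because of the setup at the beginning, which is the part that learning-outcomes models routinely leave out.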
The effect of this casual approach to experimentation is to make the assessment less valid. This may be remedied to some extent by bringing the context back to light, which is exactly why having teaching faculty involved in broad discussions that include but are not limited to assessment results is so important. Given the typical (and understandable) lack of rigor in most learning assessments, the results are more like a psychic reading than science. And that's fine. It should be told that way, so that faculty members don't get so stressed out about a "scientific" use of the results. Focusing more heavily on goals and improvements makes it easier to engage faculty, and puts the emphasis where it ought to be: taking action.