Friday, January 08, 2010

Assessment Committees

In planning for a university-wide review of SACS 3.3.1 compliance this spring, we had to consider how to structure the process and the committees. After discussion, it occurred to me that there are really two different things going on, and that they might be profitably separated. One is the ubiquitous Assessment Committee, which has been useful over the last year as a kind of R&D group--thinking up ways to assess learning outcomes more effectively. One subcommittee worked on technology (eportfolios, for example), and another on results and meaning (I called it the epistemology committee). Both are composed mostly of faculty.

But the process of review is something else. For one thing, assessment is only one component, and arguably not even (gasp) the most important. In order for the assessments to be useful, several things have to go right: deciding what assessments would be meaningful in advance of other planning, organizational follow-through, and use of results with the big picture in mind. To me, this sounds like a job for department chairs. So I will see if I can get a small group of chairs to form an Academic Effectiveness Committee to complement one on the administrative side. The Assessment Committee can still do the R&D, but the actual review of program reports will be done by the new creature.

Speaking of which, there is an interesting discussion on the SACS-L listserv (see this post) about what the standard of success should be for 3.3.1. For the non-SACS folks, this is accreditation speak for the requirement to close the loop in effectiveness planning, expressed for learning outcomes thus:
3.3.1 The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in each of the following areas:

3.3.1.1 educational programs, to include student learning outcomes

etc.
At the heart of the discussion is the meaning of the language in 3.3.1, which hasn't changed much since 2006, when I wrote to the authors to ask that they resolve the ambiguity (posted here).

It occurred to me that there is an odd thing about 3.3.1, if interpreted most strictly. It essentially asks every institution and every program to conduct independent scholarly research on the learning of students. To do this right is obviously impractical, so it starts to resemble China's Great Leap Forward:
Mao encouraged the establishment of small backyard steel furnaces in every commune and in each urban neighborhood. Huge efforts on the part of peasants and other workers were made to produce steel out of scrap metal. To fuel the furnaces the local environment was denuded of trees and wood taken from the doors and furniture of peasants' houses. Pots, pans, and other metal artifacts were requisitioned to supply the "scrap" for the furnaces so that the wildly optimistic production targets could be met. Many of the male agricultural workers were diverted from the harvest to help the iron production as were the workers at many factories, schools and even hospitals. [wikipedia]
It would make more sense to give programs a choice: either sign up to do real research on outcomes, or simply adopt proven techniques that resulted from actual research at a well-funded institution. Why run thousands of ill-designed experiments on the same subject in parallel instead of a few good ones, and just use those results? Of course, this has a dark side, since the big psychometric industry would love to lock up this business. Still, if improving learning outcomes is our goal, why aren't we focused more on using techniques that are known to work instead of continually trying to discover them?
