Thursday, April 02, 2009

Why Documenting Learning Outcomes is Hard, Part One of Infinity

Pat Williams recently asked on her blog whether anyone is actually closing the proverbial loop. Quote:

For almost all of us the question, "What are you going to do about it?" has proven the toughest to answer.
If you've ever written assessment reports yourself, or had to judge ones others have written, you'll probably find it hard to disagree with this statement. When I first confronted the task of documenting learning outcomes for our SACS report, it became obvious that there were some institutional disconnects at the accreditation level. For example, if you look back at the old SACS general education requirement (3.5.1), it was written as a minimum-standards policy (you must set standards and show that students meet them), but in practice, every single person I talked to at the SACS conferences used a continuous improvement philosophy to actually judge compliance! This included IR people who'd been on many campus visits as well as senior vice presidents of the Association speaking in formal presentations. I found this bizarre and almost Kafkaesque. I pointed it out in emails when SACS invited comment. I don't know how my suggestions were received, but the standard was changed.

The point here isn't really about SACS. It's about the confusion surrounding standards-based vs. continuous improvement models. Public schools are primarily standards-based in their approach: results are determined by standardized tests. If the learning objectives are simple enough (in the formal sense of complexity, which I've written about here many times), this approach should work well enough, because the items on the test can correspond pretty precisely to the material being taught. For example, if we want to know whether third graders know their multiplication tables, that's relatively straightforward.
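
As an illustration, here's a minimal sketch of what a standards-based check looks like when the outcome really is that simple. The 80% passing threshold and the item pool are my own placeholders, not anything prescribed by an accreditor:

```python
import random

def make_items(n=20, seed=0):
    """Draw n multiplication facts from the 1-12 times tables."""
    rng = random.Random(seed)
    return [(rng.randint(1, 12), rng.randint(1, 12)) for _ in range(n)]

def proportion_correct(items, answers):
    """Fraction of items answered correctly."""
    right = sum(1 for (a, b), ans in zip(items, answers) if a * b == ans)
    return right / len(items)

def meets_standard(items, answers, threshold=0.8):
    """Standards-based judgment: the student either clears the bar or doesn't."""
    return proportion_correct(items, answers) >= threshold

items = make_items()
answers = [a * b for a, b in items]      # a perfect paper, for illustration
print(meets_standard(items, answers))    # True
```

Because each item maps directly onto the thing being taught, the test and the outcome are essentially the same object, and the yes/no judgment is meaningful.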

Things go wrong when the assessment doesn't align with the actual goals. This is a question of the test's validity, but it often goes unnoticed. By believing too much in the wrong assessment, we can create what I call degenerate assessment loops (more here). You can read a nice parable about this in The Well Intentioned Commissar.

Continuous improvement is orthogonal to standards. If one mixes up standards-based assessment and continuous improvement, confusion quickly sets in. Even the old SACS 3.3.1 standard contained ambiguous language (I complained about that here, and it changed too, but that's probably a coincidence). The problem is that many people assume you should be able to show continuous improvement in some particular metric. That is virtually impossible once learning outcomes reach a certain level of complexity.

We can think of many assessment situations as falling into one of these four categories:
  • Summative and transparent: Low-complexity tasks like learning multiplication tables fall here. It's easy to judge how things are going, and relatively easy to effect change (that's the transparent part). One can apply standards here effectively.
  • Summative and opaque: Here it's easy to see where you are, but hard to know how to get where you want to go; the stock market is an example. In learning outcomes assessment, this category applies to standardized tests of complex behaviors, which necessarily reduce the complexity (and hence the validity). It's easy to see whether the numbers go up or down, but hard to know how to affect them (without, say, directly teaching students how to take the test). Standards can be applied here, but it's very hard to use them for accountability, since there's no guaranteed way to reach new heights of achievement.
  • Formative and transparent: In this case, subjectivity is valued more than objectivity, and standardization gives way to portfolio review, interviews, and other "fuzzy" types of assessment. This is most appropriate for complex outcomes like analytical thinking or creative writing. Transparency, or the ability to turn results into effective action, depends largely on the specificity of the outcome; in other words, on reducing the complexity to something manageable. Think of a speech instructor giving advice to a student who has just delivered his first performance. Rubrics can help identify those simplifications, but they shouldn't be thought of as a panacea. It's not always true that the whole is the sum of the parts: numerically summing subjective ratings on a rubric into some overall score is really not defensible as science or epistemology (see the sketch after this list). Continuous improvement models work in this situation; standards-based models are less effective.
  • Formative and opaque: These are the hardest nuts to crack; I'd say assessing "critical thinking" falls into this category. If you find yourself unable to come to grips with a problem even subjectively, I recommend rethinking the problem and redefining your mission. Neither standards-based nor continuous improvement models are going to help you out here. Abandon all hope.
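
To make the point about rubric totals concrete, here is a small sketch (the rubric dimensions and ratings are invented): two students with very different profiles collapse to the same total, so the single number can't tell an instructor where to intervene.

```python
# Invented rubric dimensions and ratings, purely to illustrate the point above.
rubric = ["organization", "evidence", "delivery"]

student_a = {"organization": 5, "evidence": 1, "delivery": 3}
student_b = {"organization": 3, "evidence": 3, "delivery": 3}

total_a = sum(student_a[d] for d in rubric)
total_b = sum(student_b[d] for d in rubric)

print(total_a, total_b)          # 9 9 -- identical totals
print(student_a == student_b)    # False -- very different performances
```

The totals are identical, but only the profiles carry the formative information: student A needs help with evidence, while student B is uniformly middling.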

Stay tuned for part two. [update: Part two is here]
