Sunday, April 26, 2009

Part Five: Creating Do-Gooders

Why Assessment is Hard: [Part one] [Part two] [Part three] [Part four]

Well, this is ironic. I was trying to come up with Plato's quote about men wanting to do good, and only needing to learn what that is. That was supposed to be the foil for some engaging line of thought. What I found was this. It's a service that writes essays for (I presume) college students for $12.95 per page. Their sample happens to be about Plato and Aristotle. Quoting from the essay:
Plato says that once someone understands the good then he or she will do it; he says “...what we desire is always something that is good” (pg.5). We can understand from this that Plato is saying individuals want to do good for themselves; we perform immoral deeds, because we don’t have the understanding of the good.
The existence of the quote where I found it pretty much negates its premise. And here I'd planned to make an argument that resistance to assessment by teachers is caused by them not knowing the Good. Well, I shall forge ahead anyway, and you can play along.

Even when something is clearly Good, it's not obvious that everyone will do it. Otherwise everyone with the means would eat plenty of fruits and veggies every day instead of fried pork rinds for breakfast, or whatever it is that leads to so much heart disease. So there's certainly the issue of how hard it is to do the right thing. But maybe we can take those two parts as necessary conditions to, for example, get teaching faculty to implement all of the beautiful assessment plans that have been cooked up.

First, the Good. If Professor Plum doesn't buy into the idea that this whole outcomes assessment thing is worthwhile, the project can only proceed through wesaysoism, which calls for continual monitoring and tedious administration. Administration is, of course, a hostile word to many faculty, so it's best if the message is delivered by one of their own. If the assessment director isn't in the classroom mixing up his or her own assessment recipes, the project is suspect. But this is only the first step. After all, faculty members have even more crazy ideas than administrators do--it's almost a prerequisite for the job (speaking as one, here).

No, you have to be convincing. There is a certain amount of chicken and proto-chicken here--you really need a program or two that does a good job so that you can prove that the idea can actually be carried out. If you start from zero, then the first priority is to find a spot of fertile ground and begin to cultivate such a program. Plan on this taking years. There are some natural alliances here, in unlikely places perhaps. Art programs already have assessment built in with their crits, as do creative writing programs. Finding a champion or two among the faculty whom others respect is key. You can tell who's respected by who gets put on committees.

Unfortunately, the enterprise of assessment is a lot harder in practice than it looks on paper. So having a solid philosophy can help enormously. By this I mean picking your way carefully through the minefield of empirical demands and wesaysoism to find a path the others can follow. If the goals you set demand too much scientific rigor, you'll fail, because assessment isn't science. We don't actually measure anything, despite using that language. More on that later. As a result, if "closing the loop" means to you a scientific approach of finding the path to improvement and then acting on it deterministically, it will be like trying to teach a pig to dance: frustrating to you and annoying to the pig.

On the other hand, as we've already noted, relying simply on wesaysoism to get things done means you have to micro-manage every little thing, and the faculty will try to subvert you at every turn. So doing the work of sorting out for yourself how to make a convincing argument, based on cogent principles, is worth it. Read what other people have to say. You might check out Assessing the Elephant for a divergent view. But find something that makes sense.

I find that it helps to separate out thinking about classroom-level assessment and what we might call "big picture" assessment. The former can and should be indistinguishable from pedagogy--the integration of assessment directly with assignments and how stakeholders view them. As an example, we weren't happy with students' public speaking skills, so we started videotaping student seminar presentations and having the students critique their own performances. It's not rocket science. But it wouldn't have happened if we hadn't explicitly identified effective speaking as a goal, and thought about what that means. And it seemed like a Good thing to do.

Big-picture assessment is extremely easy to do wrong, in my opinion. I think lots of low-cost subjective ratings are a good approach, but opinions will vary. In any event, don't imagine that it's easy to aggregate what happens in the classroom into something meaningful at the university level. It's very difficult. Again, have a solid philosophy to back you up. Otherwise you'll be waving your arms around, trying to distract your audience from the logical holes.

In both micro- and macro-scale assessment, try to feed the faculty's own observations back to them in useful summary form (not averages--they're not much use). They don't respect anyone so much as themselves.
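To make the "summary form, not averages" point concrete, here's a minimal sketch of what I mean. The rating scale and data are invented for illustration; the idea is just to report the full distribution of ratings, which shows faculty how opinions actually spread, rather than collapsing everything into a mean:

```python
from collections import Counter

# Hypothetical ratings: each rater scores student work on a simple
# ordinal scale. Scale names and data here are invented examples.
ratings = ["developing", "proficient", "proficient", "beginning",
           "proficient", "developing", "exemplary", "proficient"]

def summarize(ratings):
    """Report the full distribution rather than a single average,
    which would hide how the ratings were actually spread."""
    counts = Counter(ratings)
    total = len(ratings)
    return {level: f"{n}/{total}" for level, n in counts.most_common()}

print(summarize(ratings))
# e.g. {'proficient': '4/8', 'developing': '2/8', ...}
```

A table like that is immediately legible to the people who produced the ratings, which is the whole point.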

Being Good isn't enough. It also has to be easy-peasy. Lower the cost of assessment to the minimum realistic amount of time and energy. Don't meet if it can be done asynchronously. Use Etherpad instead. More generally, use technology in ways that simplify rather than complicate the process. It's all well and good to have a curriculum mapped out in microscopic detail, with boxes and rubrics for every conceivable contingency. But if no one uses it because it's too complicated, it's moot. The barrier to completion may not have to be very high to be effective. Page load times matter. A few dozen milliseconds of delay is enough to turn away customers. Too many clicks or a poorly designed interface will shed more users. It shouldn't be a death march to enter data into the system, however you do it.

I won't recommend a commercial system here, because I'm not an expert on them, and I also think you can create your own without that much trouble. You just need one decent programmer and a little patience. Again, philosophy is key to building the thing. Whether your system is paper or electronic or smoke signals, think about what the maximum useful information per effort is. It's easier to start small and grow than the other way around.

As a real example, a portfolio system I built for a client started off as the absolute bare minimum--it was really just a drop box, but with students and class sections conveniently set up automatically. Over time we added assessments to the submissions, customized per program. It would have been too much to start there. Remember the only way to solve opaque problems is an evolutionary approach.
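The "start small, grow later" shape of that system can be sketched as a toy data model. All the names here are invented for illustration, not taken from the actual system: phase one is nothing but a drop box, and phase two bolts assessments onto submissions that already exist:

```python
from dataclasses import dataclass, field

# Phase one: a bare drop box. Students and sections are just strings
# here; in practice they'd be set up automatically from enrollment data.
@dataclass
class Submission:
    student: str
    section: str
    filename: str
    assessments: list = field(default_factory=list)  # empty until phase two

class DropBox:
    def __init__(self):
        self.submissions = []

    def submit(self, student, section, filename):
        sub = Submission(student, section, filename)
        self.submissions.append(sub)
        return sub

    # Phase two, added later: attach a per-program assessment to an
    # existing submission without disturbing the drop-box workflow.
    def assess(self, sub, rubric_item, rating):
        sub.assessments.append((rubric_item, rating))

box = DropBox()
s = box.submit("jdoe", "ENG-101-02", "essay1.pdf")
box.assess(s, "thesis clarity", "proficient")
print(len(box.submissions), s.assessments)
```

The point of the sketch is that the phase-two method changes nothing about phase one; that's what made growing the real system painless.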

Satisfying all the critics isn't easy. Accreditors want different things than the senior administration, which will be yet different from what the faculty find useful. For the accreditors, sad to say, rather meaningless line graphs showing a positive slope in some kind of outcome are usually enough. This is the kind of thing that looks good, but probably doesn't mean much. So there is always the temptation to simply play the game of meeting those (very minimal) expectations. Don't do that, or you'll find yourself wondering why you didn't choose exotic plumbing as a career instead.

Convince the faculty, and everything else Good will follow. It's easy to make pretty graphs. It's much harder to lead an ongoing conversation that your very intelligent colleagues find convincing and insightful. And if you find yourself in trouble, you can always show them the website selling term papers. That should be good for an hour's distraction at least. Meanwhile you can slip out the back and work on your oratory or figure out a way to shave half a second off of page refresh times.

Next: Part Six
