Tuesday, October 20, 2009

Absurdities

I went to a workshop yesterday on rules I should know and came back with three pages of notes and two pages of ideas about a research project. It was a nice drive too. Mrs. Muffet (our GPS, who sits on a tuffet) took me back home by way of a mountainous stretch of I-40, which was more excitement than I needed without coffee or a nap to fortify me. I made some new acquaintances, which is always the highlight of such trips. Getting there was a trip in itself, logistically. It went like this, in a series of emails to the organizer, whom I'll call Mrs. K:
Me: Hi. Our copiers are being replaced, and I can't fax the form, so my information is enclosed.

Mrs. K: Okay, but I need the fax.

Me: Our machines are still wonky. I've scanned the completed form and attached it as pdf.

Mrs. K: Can you send me the fax?

Me: I was hoping you could just print the pdf...?

Me (day before the meeting): Hi. I haven't got confirmation yet. Am I on the list?

Mrs. K: If you send me the fax we can get you on today.
I sent the damned fax.

While I was driving I amused myself with a riff on a topic from an ASSESS post. Someone had asked about measuring the consequences of values and beliefs. The discussion was very useful, I'm sure, but the use of 'measurement' in assessment makes me itch, and I imagined the following dialogue at a fictional Lenin School of Management:
Stani: How did I do on my test?

Vlad: Ah...not good at all, I'm sorry to say.

Stani: What happened???

Vlad: You don't believe in the precept that the ends justify the means.

Stani: Why would you think that?

Vlad: We measured your belief. Less than 10%: shameful.

Stani: But I do believe that the ends justify the means.

Vlad: No, you don't. You believe that you believe it, but you don't actually believe it.

Stani: Your measurement shows that too?

Vlad: Verily.

Stani: And you believe these measurements?

Vlad: Of course. Here is my score--see: 100% belief in the measurements.

Stani: But what if the measurements are wrong?

Vlad: I don't believe the measurements are wrong. I showed you my score.

Stani: But if the scores are wrong, then the measurements don't mean anything, regardless of what you believe.

Vlad: I don't believe that to be the case.

Stani: So if the scores told you that you believed in the tooth fairy, you would believe them, even if you don't believe in the tooth fairy?

Vlad: I can believe that I believe in the tooth fairy without believing in the tooth fairy. Just like you can believe that you believe that the ends justify the means, even if you don't really believe it.
This seems rather like a Monty Python script. On the other hand, there is a lot of believing in the scores. As evidence, I present the iCritical Thinking™ test, which aims to solve this problem:
Today's academic and professional environments demand more than current Internet and computer skills. They also require students and employees to navigate, critically evaluate and make sense of the wealth of information available through digital technology.
In college, we call it information literacy. At least, this is pretty close, although it involves using office applications to negotiate "scenarios." This isn't ambitious enough to market, apparently, because the presentation on the website blurs the lines between nuts & bolts online skills and "critical thinking," whatever that is. The marketing material uses quotes like this one attributed to Pres. Obama (you can find the original here):
The solution to low test scores is not lower standards; it's tougher, clearer standards... I'm calling on our nation's governors and state education chiefs to develop standards and assessments [that measure] 21st century skills like problem-solving and critical thinking...
Another one targets employers, and uses a quote from The Economist (Oct. 5, 2006):
The nature of the economy is changing. It's putting more and more premium upon intellectual skills, analytical skills, creative skills, which are in short supply.
The implication is that this company can certify that you have these valuable and complex skills. There's plenty more, including the usual claims of impeccable validity (for what isn't exactly addressed; see "Questions of Validity"). Saying a test is valid is like saying a tool is useful. Useful for what? Changing the oil filter or pounding nails? Or just weighing down that big stack of meeting minutes on your desk? The construction of the test material is likewise celebrated (here):
The development of the Global Standard 3 took over 9 months of research, data collection from 400+ subject matter experts from over 30 countries and final ratification from the members of the Global Digital Literacy Council. By going through this rigorous process it ensures that Global Standard 3 is not only current and relevant, but a true global digital literacy standard.
So this truly is a one-size-fits-all test, conveniently administered in an hour, that can be used from high school through an entire career path, across a world of 6,000+ spoken languages and innumerable cultures, to assess digital literacy (but they want you to think "critical thinking").

This is of the same species as the "measuring belief" absurdity above, and represents an insidious problem. The song goes like this (to the tune of "Goodbye Yellow Brick Road"):
  1. There is increasing demand for certification of skills due to the diversity of educational experiences. See "Scaling Higher Ed" for more.

  2. There are also increasingly desperate and insistent demands from government(s) for "measurable standards."

  3. Beyond low-complexity skills, it is difficult or impossible to construct valid instruments that are very general in nature (i.e. amenable to standardized testing). For broad-spectrum skills like critical thinking, validity is a slippery concept.

  4. Apply Zog's Lemma--where error with economic value exists, it will be sold. Who better to do that than standardized test makers?

  5. Apply something like Gresham's Law: bad assessments drive out good assessments. This amplifies the mistake by industrializing and institutionalizing tests and scores of dubious validity. It creates a monological definition of what "thinking" is, viz., "thinking" means a score on a standardized test. Think of the outsized value placed on SAT scores.
You can see all of these except the last one evidenced in the iCritical Thinking materials. There's little in the way of modesty about the claims. And who's to prove them wrong? Certainly, the business model is a good one: the next thing that comes along is "prep materials" for the test (see "Closing the Loop CLA-style"), and possibly end-to-end control of the curriculum, coaching, and test.

There are already plenty of information literacy programs and tests out there. TILT (now under re-development, apparently) is the one I've heard librarians mention the most. But it doesn't come with slick marketing materials that claim ultimate relevance.

I have to wonder what professional psychometricians think about all this. This industry is theirs; the unfortunate language of "measuring" mental traits is theirs too. There are serious attempts to come to grips with some of the fundamental problems, like Bennett and Hacker's Philosophical Foundations of Neuroscience. And there are plenty of academics in the field who work hard at validity and don't inflate claims. But where is the general hue and cry, the expert advice to lawmakers that, no, we don't really measure minds, and any standardized test is bandwidth-limited simply because it's standardized? I don't see it. Maybe someone can point me in the right direction.
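As for "bandwidth-limited," here's what I mean, back-of-the-envelope style. A fixed-form test can only distinguish as many test-takers as it has answer patterns, so there's a hard information ceiling built into the form itself. This is just an illustration with hypothetical numbers (nobody's actual psychometrics), sketched in Python:

```python
import math

def test_bandwidth_bits(n_items: int, n_options: int) -> float:
    """Upper bound, in bits, on what a fixed-form test can report:
    n_items items with n_options choices each give n_options**n_items
    distinguishable answer patterns, i.e. n_items * log2(n_options) bits."""
    return n_items * math.log2(n_options)

# A hypothetical 60-item, 4-option multiple-choice instrument:
print(test_bandwidth_bits(60, 4))  # 120.0 -- a ceiling, not an estimate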

