Tuesday, August 31, 2010

Value-Added Assessment

"Value-Added" is becoming a meme. The LA Times recently went so far as to publish names and rankings of public school teachers based on the methodology. Teachers reacted with predictable affront:
"It is the height of journalistic irresponsibility to make public these deeply flawed judgments about a teacher's effectiveness," said a statement issued by United Teachers Los Angeles.
There has been plenty of other public discourse on the subject. A Washington Post piece entitled "Study blasts popular teacher evaluation method" calls attention to a study questioning the usefulness of so-called value-added measurement, which has been used to assess teacher performance. Policy makers seem to like the idea, probably because it makes their job easier. The CLA is a college-level assessment that advertises value-added reports, although of institutions, not individual teachers. Note that the study comes from the Economic Policy Institute, which (according to Wikipedia) receives significant funding from unions. I can't verify this, but I would assume any bias on their part would run against public rankings of teachers, whatever the methodology.

An excerpt from the article:
One study found that across five large urban districts, among teachers who were ranked in the top 20 percent of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40 percent. Another found that teachers’ effectiveness ratings in one year could only predict from 4 percent to 16 percent of the variation in such ratings in the following year. Thus, a teacher who appears to be very ineffective in one year might have a dramatically different result the following year.
In other words, the assessments aren't reliable: explaining only 4 to 16 percent of next year's variation corresponds to a year-to-year correlation of roughly 0.2 to 0.4, which is very weak for a high-stakes measure. If that's the case, it's damning. I recently did something similar with course evaluations for our university; you can see some results here. Those aren't very reliable either.
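As a rough consistency check, the excerpt's two findings fit together. Here is a quick simulation, a sketch only: the bivariate-normal model and the 0.2 and 0.4 correlations are my reading of the 4-to-16-percent figure, not anything taken from the studies themselves. It draws pairs of year-over-year ratings with a given correlation and counts how year-one top-quintile teachers fare in year two.

    import numpy as np

    rng = np.random.default_rng(1)

    # Take the excerpt at face value: one year's rating explains 4-16%
    # of the next year's variation, i.e. a correlation of 0.2 to 0.4.
    for r in (0.2, 0.4):
        y1, y2 = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], 100_000).T
        top = y1 >= np.quantile(y1, 0.8)                 # top 20% in year 1
        stay = (y2[top] >= np.quantile(y2, 0.8)).mean()  # still top 20%
        drop = (y2[top] <= np.quantile(y2, 0.4)).mean()  # fell to bottom 40%
        print(f"r = {r}: {stay:.0%} stay in the top 20%, "
              f"{drop:.0%} drop to the bottom 40%")

At r = 0.2, roughly 30 percent of the top quintile repeats and roughly 30 percent falls to the bottom 40 percent, which is in the neighborhood of the first study's numbers.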

It occurs to me that it should be easy to test the limits of value-added reliability empirically: simulate student and teacher abilities, add the measurement error a standardized test introduces, and see how stable the resulting teacher rankings are from year to year.
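Here is a minimal sketch of what that could look like. Every parameter (number of teachers, class size, the relative spreads of teacher effects and student-level noise) is an assumption chosen for illustration, not calibrated to any real test:

    import numpy as np

    rng = np.random.default_rng(0)

    n_teachers = 1_000   # assumed pool of teachers
    class_size = 25      # assumed students per teacher per year
    teacher_sd = 0.1     # assumed spread of true teacher effects
    student_sd = 1.0     # assumed student-level noise in score gains

    # True teacher effects, held fixed across both years.
    effect = rng.normal(0, teacher_sd, n_teachers)

    def vam_estimate():
        """One year's value-added estimate: a teacher's mean class gain,
        where each student's gain = teacher effect + individual noise."""
        noise = rng.normal(0, student_sd, (n_teachers, class_size))
        return effect + noise.mean(axis=1)

    year1, year2 = vam_estimate(), vam_estimate()
    r = np.corrcoef(year1, year2)[0, 1]
    print(f"year-to-year correlation: {r:.2f} (r^2 = {r * r:.2f})")

With these made-up numbers, the sampling error of a 25-student class mean (standard deviation 0.2) swamps the true teacher effect (standard deviation 0.1), and the year-to-year correlation comes out near 0.2. The point of the exercise: no real instability in teaching quality is needed to reproduce the reported unreliability.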

In the assessment community of higher ed, one weather vane is Assessment Update, Trudy Banta's publication at IUPUI. It doesn't seem to be available on the web, but over the last few years it has run articles critical of the statistics behind the CLA's value-added methodology. It's interesting, then, that the Washington Post article says:
And RAND Corporation researchers reported that,
"The estimates from VAM modeling of achievement will often be too imprecise to support some of the desired inferences...."
and that 
"The research base is currently insufficient to support the use of VAM for high-stakes decisions about individual teachers or schools."
As I recall, the CLA was created by RAND, so it seems odd that its own researchers would undermine the whole premise. There's a riddle to be solved.
