The students took the CLA, a ninety-minute nationally standardized test, during the same week in which faculty members assessed students’ e-portfolios using rubrics designed to measure effective communication and critical thinking. In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years. The faculty assessors were trained and their rubric assessments “normed” to ensure that interrater reliability was suitably high. One administrator's conclusion about the mismatched scores is that:
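As an aside, "norming" is usually checked with a chance-corrected agreement statistic such as Cohen's kappa. Here's a minimal sketch of that computation for two raters on a 4-point rubric; the scores are invented for illustration, since the actual rating data isn't published:

```python
# Cohen's kappa for two raters scoring the same e-portfolios on a
# 4-point rubric. These scores are made up for illustration only.
from collections import Counter

rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]

n = len(rater_a)
# Observed agreement: fraction of portfolios where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal score distribution.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / n**2

# Kappa: how far observed agreement exceeds chance, as a share of the
# maximum possible excess. 1.0 is perfect; 0 is no better than chance.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # → 0.714
```

A kappa around 0.7 is typically read as substantial agreement; norming sessions aim to push it at least that high before the scores are used.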
The CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.

“When we talk about standardized tests, we always need to investigate how realistic the results are, how they allow for drill-down,” Robles says. “The CLA provides scores at the institutional level. It doesn’t give me a picture of how I can affect those specific students’ learning. So that’s where rubric assessment comes in—you can use it to look at data that’s compiled over time.”

You can find a PowerPoint show for the research here. Here's a slide taken from that, which summarizes student perceptions of the CLA.
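A methodological aside before the numbers: comparing rubric scores to CLA results comes down to computing pairwise Pearson correlations across students. Here's a minimal sketch of that calculation with invented scores (the study's real data lives in the linked slides):

```python
# Pearson correlation between one rubric dimension and CLA scores,
# paired by student. All numbers here are invented for illustration.
import statistics

rubric = [2.0, 3.5, 3.0, 4.0, 2.5, 3.0, 3.5, 2.0]   # hypothetical rubric scores
cla = [1050, 1180, 1100, 1220, 1020, 1150, 1130, 1080]  # hypothetical CLA scores

mean_r, mean_c = statistics.fmean(rubric), statistics.fmean(cla)

# Sample covariance of the paired scores.
cov = sum((r - mean_r) * (c - mean_c)
          for r, c in zip(rubric, cla)) / (len(rubric) - 1)

# Pearson r: covariance scaled by both standard deviations, so it
# always lands in [-1, 1] regardless of each test's score scale.
r_xy = cov / (statistics.stdev(rubric) * statistics.stdev(cla))
print(round(r_xy, 3))  # → 0.858
```

The scale-independence is the point: a CLA score in the thousands and a rubric score out of 4 can still correlate strongly, or, as in the table below, not at all.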
Here are the correlations in question:
It's hard to see how anything is related to anything else here, except maybe breaking arguments and analytical writing. I would conclude that the CLA isn't assessing what's important to the faculty, and is therefore useless for making improvements. Since UC is part of the VSA, they can't say that. Instead they say:
The CLA is more valid? [choking on my coffee here] Valid for what? Saying that one school educates students better than another school? The two bullets above seem Orwellian in juxtaposition. How can an assessment be valid if it isn't useful for student-level diagnostics? Yes, I understand that the CLA doesn't give the same items to each student, and that it's intended only for comparing institutions or providing a "value-added" index, but the inescapable fact is that learning takes place within students, not in aggregates of them. At some point, the dots have to be connected between actual student performance and test results if they're going to be good for anything. Oh, but wait: here's how to do that.
By the way, if you don't know the story of Cincinnatus, it's worth checking out.