At the heart of the method Jody constructs is the notion that we need to make room in our assessments for uncertainty. Or, to put it the other way around, the confidence a student has in an answer matters. From the article:
[T]raditional scoring, which treats students' responses as absolute (effectively a 0 and 1 based probability distribution), begs the question: Is a student's knowledge black and white? How can a student express belief in the likelihood that an alternative may be correct? Further, how can a student's ability to carry out a process be traced and evaluated? Addressing these questions requires going beyond traditional multiple-choice testing techniques.

A couple of days ago I wrote about Ed Nuhfer's knowledge surveys, which approximate student confidence in subject material. Jody's idea extends this to a testing environment. Obviously there are differences between surveys and tests. One might expect students to be honest about their confidence on a survey, or perhaps to underestimate it slightly, since they may see it as shaping the review and the test itself. On a test, a student has nothing obvious to gain by admitting uncertainty. That changes if "near misses" are partially rewarded, much like partial credit on a pencil-and-paper test. But how can one indicate such subtleties on a multiple-choice test?
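To make the contrast concrete, here is a minimal sketch in Python of what scoring a response expressed as a probability distribution over the options might look like. The function names and the simple "credit equals probability on the correct option" rule are my own illustration, not Dr. Paul's actual scheme.

```python
def traditional_score(choice: str, correct: str) -> float:
    """Classic 0/1 scoring: the response is treated as absolute."""
    return 1.0 if choice == correct else 0.0

def confidence_score(distribution: dict[str, float], correct: str) -> float:
    """Distribution-based scoring (illustrative): credit equals the
    probability the student assigned to the correct option, so a
    near miss earns partial credit."""
    return distribution.get(correct, 0.0)

# A student torn between B and C still earns something for placing
# substantial weight on the correct answer.
response = {"A": 0.1, "B": 0.6, "C": 0.3, "D": 0.0}
print(traditional_score("B", "C"))      # 0.0 -- all or nothing
print(confidence_score(response, "C"))  # 0.3 -- partial credit
```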
Dr. Paul's solution is to create software that allows rich responses from test-takers. A schematic of the interface is shown below, annotated with the meanings of the various response zones.
The response mechanism allows students to waffle about their answers. The paper's analysis of different weighting strategies is quite detailed (and mathy). It raises, and attempts to answer, interesting questions about multiple-choice testing and the idea of rewarding partial knowledge.
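For a flavor of what weighting strategies can look like, here is a sketch comparing three generic scoring rules from the scoring-rule literature. These are stand-ins for illustration, not necessarily the specific weightings analyzed in the paper.

```python
import math

def linear_score(dist: dict[str, float], correct: str) -> float:
    # Credit = probability on the correct option. Simple, but a
    # score-maximizing student should pile all weight on one option.
    return dist[correct]

def brier_score(dist: dict[str, float], correct: str) -> float:
    # Quadratic (Brier-style) rule: honest confidence maximizes
    # expected score, so hedging is rewarded when warranted.
    return 1.0 - sum((p - (1.0 if k == correct else 0.0)) ** 2
                     for k, p in dist.items())

def log_score(dist: dict[str, float], correct: str) -> float:
    # Logarithmic rule: sharply penalizes confident wrong answers.
    return math.log(max(dist[correct], 1e-9))

response = {"A": 0.1, "B": 0.6, "C": 0.3, "D": 0.0}
for rule in (linear_score, brier_score, log_score):
    print(rule.__name__, round(rule(response, "C"), 3))
```

One relevant design point: the quadratic and logarithmic rules are "proper" scoring rules, meaning a student's expected score is maximized by reporting their honest probabilities. That speaks directly to the incentive problem raised above, where a student otherwise has nothing to gain by admitting uncertainty.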