By comparison, the language and ways of knowing that create popular culture are dialogical. There are no rules set down about what new words (or memes, if you prefer) will arise, and no deterministic rules that could be applied to predict cultural evolution. A stock exchange has some elements of monologism (precise definitions regarding financial transactions, for example), but the evolution of prices is dialogical--an unpredictable consensus between buyers and sellers.
One of the characteristics that distinguishes a monological language from a dialogical one is that in the former case, the names can be arbitrary. What matters is their relationship in the model that's used for understanding the world. For example, electrical potential is measured in volts, current in amperes, and power in watts. These are the names of scientists, as are the ohm, henry, and farad, terms that refer to electrical properties of circuit elements. If they were dialogical names, they would more likely be "Zap," "Spark," and "Shock" or something similarly descriptive. This is because in a dialogue, it's an asset for words to be descriptive--you don't have to waste extra time saying what you meant. By contrast, it's enough to know that V = IR when calculating voltage in a circuit. It doesn't matter whether we call it volts or zaps.
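To make that concrete, here's a minimal sketch (the function and parameter names are hypothetical, chosen for the joke) showing that a monological formula is indifferent to its names: only the relationship V = IR carries the meaning.

```python
def voltage(current_amperes, resistance_ohms):
    """Standard names: V = I * R."""
    return current_amperes * resistance_ohms

def zaps(spark, shock):
    """Whimsical names; the relationship is identical."""
    return spark * shock

# 2 A through a 5-ohm resistor gives 10 V, whatever we call the units.
assert voltage(2.0, 5.0) == zaps(2.0, 5.0) == 10.0
```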
It's a trope to poke fun at academics who speak in highfalutin language just to say something ordinary. When Sheldon in The Big Bang Theory gets stuck on a climbing wall, he says "I feel somewhat like an inverse tangent function that's approaching an asymptote," which is then reinforced by his desperate follow-up "What part of 'an inverse tangent function approaching an asymptote' did you not understand?" [video clip] Some argue that academic disciplines that are inherently more dialogical use language that's unnecessarily opaque. This point was publicly made in the "Sokal Affair," in which a physicist submitted a jargon-laden, meaningless paper to a humanities journal as a hoax, and it was published.
Using Rubrics for Assessment
To connect these ideas to the assessment practice of using rubrics, let me first review what they are.
The term "rubric" in learning outcomes assessment means a matrix that indexes competencies versus accomplishment levels. For example, a rubric for rating student essays might include a "correctness" competency, which is probably one of several on the rubric. There would be a scale attached to correctness, which might be Poor, Average, Good, Excellent (PAGE), or one tied to a development sequence like "Beginning" through "Mastering." In our Faculty Assessment of Core Skills survey, we use Developmental, Fresh/Soph, Jr/Sr, Graduate to relate the scale to the expectations of faculty.
A rubric alone is not enough to do much good. A fully-developed process using rubrics might go something like this, starting with developing your own.
- Define a learning objective in ordinary academic language. "Students who graduate from Comp 101 should be able to write a standard essay that uses appropriate voice, is addressed to the target audience, is effective in communicating its content, and is free from errors."
- The competencies identified in the outcomes statement are clear: voice, audience, content, and correctness. These define the rows of the rubric matrix.
- Decide on a scale and language to go with it, e.g. PAGE.
- Describe the levels of each competency in language that is helpful to students. It's better to be positive than negative--that is, when possible, define what you want, not what you don't want. There are many resources on constructing rubrics you can consult; the AAC&U's VALUE rubrics are good examples to refer to.
- The rubric should be used in creating assignments, and distributed with the assignment, so the student is clear about expectations. Use of rubrics in grading varies--it's not necessary to tie an assessment to a grade, but there are some obvious advantages if you do.
- Rating the assignment that was designed with the rubric in mind should not be a challenge. If it's essential to have reliable results, then multiple raters can be used, and training sessions can reduce some of the variability in rating (see the sketch after this list). Nevertheless, it's not an exact process.
- Over time you create a library of samples to show students (and raters) what constitutes each achievement level.
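To illustrate the multi-rater step, here is a sketch under assumed conditions: three hypothetical raters score one essay on the PAGE scale encoded 1-4, and a crude spread check flags competencies where disagreement is large enough to be worth discussing in a norming session.

```python
from statistics import mean

# Hypothetical ratings of one essay by three trained raters,
# PAGE encoded 1-4. Disagreement is expected; the process is not exact.
ratings = {
    "voice":       [3, 3, 4],
    "audience":    [2, 4, 3],
    "content":     [3, 4, 3],
    "correctness": [4, 4, 4],
}

for competency, scores in ratings.items():
    spread = max(scores) - min(scores)  # crude agreement check
    note = "  <- discuss in a norming session" if spread > 1 else ""
    print(f"{competency:12s} mean={mean(scores):.2f} spread={spread}{note}")
```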
Note that the way to do this is NOT to take some rubric someone else has created and apply it to assignments that were not created with the rubric in mind. That's how I did it the first time, and wasted everyone's time.
Rubrics as Constructed Language
The learning objectives that rubrics are employed to assess are often complex, so even though an attempt is made to define the levels of accomplishment, these descriptions are in ordinary language. That is, there's no formal deductive structure or accompanying model that deterministically generates output ratings from inputs. Instead, the ratings rely on the judgment of professionals, who are free to disagree with one another. If your attitude is that there is one true rating that all raters must eventually agree on, you're likely to be frustrated. One problem is that the competencies, like content and correctness in the example, are not independent. If there are too many spelling and grammar mistakes on a paper to gain any sort of comprehension, then content, style, voice, and so on are also going to be degraded. One rater of writing samples I remember was adamant that a single spelling mistake meant all the other ratings should be lowered as well.
So using rubrics is dialogical, but by way of a nice compromise. The power of rubrics comes from restraining the language we use to describe student work, according to a public set of definitions. Even though these are not rigorous, they are still extremely useful in focusing attention on the issues that are deemed important. In addition, rubrics create a common language in the learning domain. It's important for students not just to know content, but to know how professionals critique it, and rubrics are a way to do that. They can be used for self-reflection or peer review to reinforce the use of that language.
The advantage of generating useful language is one reason I use a PAGE scale only as a last resort. Terms like poor, average, and so on are too generic, and too easily made relative. An excellent freshman paper and an excellent senior paper should not be the same thing, right? Bad choices in these terms early on can have long-term consequences when you want to do a longitudinal analysis of ratings.
There is a tendency among some to view rubric rating as a more monological process, but I can't see how that can be supported for most learning outcomes. In my opinion, rubrics are most useful in creating a common language to employ in teaching, reining in the vast lexicon that might naturally be used and focusing on the elements we agree are the most important. That benefits everyone concerned.