- So oft in theologic wars,
- The disputants, I ween,
- Rail on in utter ignorance
- Of what each other mean,
- And prate about an Elephant
- Not one of them has seen!
Writing instructors need to do more to prevent colleges from adopting tests that only narrowly measure students' writing abilities, several experts said last week at the annual meeting of the Conference on College Composition and Communication.

I've used such simple measures myself. An administrator was desperate for a quick way to assess writing quality and had the English faculty come up with a simple exercise to test all of our students. Each student was given a paragraph with errors to correct, ranging from their/there mix-ups to faulty parallel structure. Several faculty members, including me, scored the results. One notable phenomenon was that students responded in ways that showed they were trying to answer the 'correct' way rather than the natural way they spoke and wrote. A good example in the "real world" is The Clash song "Should I Stay or Should I Go?", which includes the line:
The underlining is courtesy of MS Word's grammar checker. There's actually a website praising the band for their correct use of grammar. I've scratched my head trying to figure out who's right in this case, but what's clear is that no one would actually talk that way unless they were trying to be 'correct.' In Assessing the Elephant I write about what I call monological definitions--those imposed by an outside system and not subject to debate. Grammar rules are like that, so people act differently when they think they will be judged on the 'right' answer rather than the one they'd normally give. This is bound to bias the results of any simple test of this kind, and it's just one tiny example of how such an artificial exercise leaves much to be desired as an assessment of writing. Even measuring a student's ability to write correctly--the simplest possible assessment--is fraught with problems.
As an aside, I have to relate my favorite response to the exercise I described. The paragraph the students were asked to correct was on the topic of life at the South Pole. One of the sentences was something like "The fish living under the water of the Antarctic are adapted to the near-freezing conditions." One student circled "under the water" and penciled in "where else would the fish live?" Indeed.
According to the quoted article, finding more subtle forms of assessing writing is a problem needing a solution:
If "subtle" doesn't include authentic, I think this will be a very difficult proposition.Charles Bazerman, the organization's chair, said in an address to members that the move toward formal writing assessment is "inevitable," and that writing instructors need to furnish the evidence and the expertise to help improve how students are judged.
"It is up to us to provide more subtle forms of assessment," Mr. Bazerman said.
The article goes on to describe how writing assessment was implemented at one university, in an ad hoc manner (emphasis added):
Initial designs for the new assessment, put together by administrators at the last minute, were met with "confusion, ignorance, even outrage" among the writing faculty.

This illustrates a central conceit about assessment in general, viz., that it's easy and can be done in straightforward and obvious ways. Heck, even an administrator without training or knowledge of the field can do it (my extrapolation). Contrast this with the faculty's stated opinion, taken from the article:
[G]rowth in writing is complex, slow, often nonlinear, intrinsically contextual, and not necessarily immediately visible.

Context is key to authenticity. Writing a lab report is different from writing a poem, which is different from writing a mathematics article. Experts in those areas can judge writing samples and rate them under the right conditions, but just because Stanislav can write a decent math paper does not mean that he can write a good review of a lit crit article. This is so obvious that it hardly bears mentioning, except that this kind of mistake gets made over and over.
I've used the following example before, but it serves well to illustrate the point. A supervisor is generally capable of judging whether employee Stanislav does a 'good job' at executing his responsibilities. Can we therefore make the leap to assume that there is a general test we can construct for someone doing a 'good job'? This test would have to work equally well for, say, a nuclear engineer, an airline pilot, a stock trader, and a science teacher. Obviously not. I had a math professor who called the subject of mathematical topology "generalized silliness." That's a good description of what a general test of 'doing a good job' would be. Ditto for a general test of 'good writing' or 'critical thinking.'
Induction (generalizing from examples to create categories and rules) is powerful and easy to misuse. Just because there is 'writing' and there are 'tests' and there is 'excellence' does not mean that there are general tests of writing excellence. To assume that such a thing exists and proceed on that basis is, well, silliness of a general kind.