Saturday, January 07, 2006

What Does College Teach?

In a November 2005 Atlantic Monthly article, Richard Hersh subtitles his piece "It's time to put an end to 'faith-based' acceptance of higher education's quality." The thesis is that colleges and universities don't actually know, because they can't measure, whether students are learning. After raising the question in rather breathless terms, the author summarizes the state of assessment and then gets predictably bogged down in trying to propose a solution.

Hersh starts with the question "What makes your college worth $35,000 a year?" I suppose this was meant to be provocative, but he could hardly have chosen an easier question to answer. He says this is a hard question for college presidents to answer. Why?

About five seconds of searching on Google turns up a suitable response:

Any way you measure it, a college degree is the best investment of your life. In today's dollars, a bachelor's degree is worth more than $2.1 million over 40 years. "Having that post-secondary diploma can make such a difference in lifetime earnings," said Washington, D.C.-based Employment Policy Foundation President Ed Potter.
Never mind the fact that few students pay full sticker price. This is an ROI that any CFO would fantasize about. But read a little further and you discover that Hersh really means something else. He wants a measure of learning, not worth.
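To put a rough number behind that CFO remark, here is a back-of-envelope sketch in Python (my own arithmetic, not anything from Hersh's article; the four-year degree, full sticker price, and zero discount rate are my assumptions), comparing the cost of the degree with the quoted $2.1 million premium.

    # Back-of-envelope ROI for a bachelor's degree.
    # Assumptions (mine, not Hersh's): 4 years at the full $35,000 sticker
    # price, the quoted $2.1M lifetime premium spread evenly over 40
    # working years, and no discounting, for simplicity.
    TUITION_PER_YEAR = 35000
    YEARS_OF_COLLEGE = 4
    EARNINGS_PREMIUM = 2100000
    WORKING_YEARS = 40

    cost = TUITION_PER_YEAR * YEARS_OF_COLLEGE            # 140,000
    premium_per_year = EARNINGS_PREMIUM / WORKING_YEARS   # 52,500 per year

    print("Total sticker-price cost:", cost)
    print("Average annual premium:", premium_per_year)
    print("Simple payback (years):", round(cost / premium_per_year, 1))        # about 2.7
    print("Undiscounted return multiple:", round(EARNINGS_PREMIUM / cost, 1))  # 15x

Even at full sticker price and with no discounting at all, the degree pays for itself in under three years.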

[W]e have little direct evidence of how any given school contributes to students' learning.
Apparently what he's really after is a way to rate colleges after taking incoming student quality into account. Again, you could produce such ratings by looking at the salary histories of graduates, normalized by the socio-economic status of their parents or guardians (a rough sketch of what I mean follows). But despite his original question, he's not really interested in economic worth but in improvement in learning.
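Here is a minimal sketch of that normalization, with entirely made-up records and a deliberately crude one-variable adjustment (my own construction, not anything proposed in the article): fit a line predicting graduate salary from parental SES, then score each school by how far its graduates land above or below that prediction on average.

    # Sketch: rate schools by graduates' salaries after adjusting for the
    # socio-economic status (SES) of their parents. Hypothetical data and
    # a deliberately crude ordinary-least-squares adjustment.
    from collections import defaultdict

    # (school, parental SES index, graduate salary) -- made-up records
    records = [
        ("State U",   40, 48000),
        ("State U",   55, 61000),
        ("Ivy-ish",   80, 90000),
        ("Ivy-ish",   90, 95000),
        ("Small Col", 50, 58000),
        ("Small Col", 35, 47000),
    ]

    # Fit salary = a + b * ses by ordinary least squares.
    n = len(records)
    mean_ses = sum(ses for _, ses, _ in records) / n
    mean_sal = sum(sal for _, _, sal in records) / n
    b_num = sum((ses - mean_ses) * (sal - mean_sal) for _, ses, sal in records)
    b_den = sum((ses - mean_ses) ** 2 for _, ses, _ in records)
    b = b_num / b_den
    a = mean_sal - b * mean_ses

    # A school's score is the average amount by which its graduates
    # out-earn (or under-earn) the salary predicted from parental SES alone.
    residuals = defaultdict(list)
    for school, ses, salary in records:
        residuals[school].append(salary - (a + b * ses))

    for school, rs in sorted(residuals.items()):
        print(school, round(sum(rs) / len(rs)))

A real version would need longitudinal salary data, a better SES measure, and controls beyond a single straight line, but the shape of the calculation is the same.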

We might well ask what the source of college graduates' extra income is. Is the marketplace so easily duped that a college degree represents nothing about the skills and abilities of graduates, and that the mere slip of sheepskin is sufficient to accrue the megabucks that accompany a bachelor's degree? That seems exceedingly unlikely. If graduates were no better than non-graduates, US companies would very quickly stop paying them a premium. So even though we haven't yet discussed direct measures of learning, there is certainly good circumstantial evidence of it.

Hersh cites four current categories of measures of college quality: actuarial data (graduation rates, admissions selectivity, and the like), expert ratings by college administrators and professors, student/alumni surveys (including the NSSE), and direct assessment (grades). Hersh is openly contemptuous of grades:
For our purposes [grades and grade point averages] are nearly useless as indicators of overall educational quality--and not only because grade inflation has rendered GPAs so suspect that some corporate recruiters ask interviewees for their SAT scores instead. Grades are a matter of individual judgment, which varies wildly from class to class and school to school; they tend to reduce learning to what can be scored in short-answer form; and an A on a final exam or a term paper tells us nothing about how well a student will retain the knowledge and tools gained in coursework or apply them in novel situations.

You see what I mean about 'breathless'. Where to begin? First, there are no references to support any of this, and my own experience (about 20 years in the classroom) doesn't jibe with it. Generally speaking, 'A' students are going to be better than 'D' students. And I'm trying to imagine what 'short-answer form' means for a class on assembly language or mathematical analysis. If we accept this paragraph as an emotional outburst rather than a rational argument, I think we can boil it down to this: the author doesn't find it acceptable that student GPAs aren't useful for comparing one school to another.

Of course they're not! The whole endeavor of trying to rank one college against another is daft--anybody who's ever been to college knows that some programs are better than others. What good would an average of program quality be to anybody, even if you contrived to compute it? Moreover, such rankings are by nature one-dimensional. Imagine if we had to rank people--your friends and co-workers--on such a scale. Bob is a 4.5, but Mary is a 7.2. Ridiculous.

Again, the article's list of assessments makes no mention of post-college performance in the workplace, though Hersh alludes to it disparagingly: "What is worth learning cannot be measured, some say, or becomes evident only long after the undergraduate years are over."

In the following paragraphs, Hersh plays a switcheroo. After arguing that there's no good metric for comparing colleges against each other, he proceeds as though he had shown that no good assessment is going on even at the classroom or program level: "[C]umulative learning is rarely measured."

Huh? Anyone who's been to a conference on education lately (SACS, for example) knows that half the sessions are devoted to this very topic. Not only are people interested in it, it's required by most accrediting bodies (all the ones I know of). After saying that cumulative learning is rarely measured, the author revises this to "[M]easuring cumulative learning hasn't been tried and found wanting: it has been found difficult and untried."

I don't know about your institution, but every academic program at mine has a capstone course sequence. In mathematics, for example, students complete three semesters of independent research with a faculty advisor, during which we rate them on various skills and abilities using a scale that runs from 'remedial' to 'graduate'. Math is like most disciplines in that, without cumulative learning, you simply can't progress. How could you hope to pass differential equations without a clue about basic calculus?

Now that the author's point has wandered off into the landscape of student and program assessment (rather than college rankings), he finds some promising approaches, including portfolios. These have been around for a long time, of course, and there are some very good electronic portfolio vendors out there (just google 'electronic portfolio'). I don't know what the usage statistics are, but a lot of schools use them to track student work. We built our own in 2005.

We finally reach the punch line of the article: a pitch for a product the author co-directs, the Collegiate Learning Assessment Project. The last two-sevenths of the piece are devoted to it. As a matter of fact, my copy of the article came bundled in an envelope with further advertising for the CLA surveys.

So the answer to the college ratings 'problem' is a standardized test? The accompanying literature assures us that it has been proven valid and reliable. Convenient as that is, validity is not something that can be certified externally, because it depends on the question you're trying to answer by giving the test. That's a subject for another post...
