Thursday, May 21, 2009

The Higher Ed Buffet

An enthymeme implicit in any solution to the hard assessment problem is that education is a commodity. That is, if we could truly pin a meaningful number on the value added (or even absolute accomplishment) of graduates from respective institutions of higher learning, and could rank colleges and universities in a scientific way, then we would have abstracted almost every distinguishing detail away from the college experience. Students would be uniform raw materials for the industrial maw of enlightenment, and the output would comprise finely packaged standardized brains, weighed and bar-coded and packed in bubble wrap, ready for shipping. Employers and graduate schools could simply mail-order their inputs, and an efficient market would quickly find a price equilibrium.

This belief in wholesale data compression of the multitude of products delivered by any college into a single number is a staggering arrogance that ignores what Susan Jacoby calls "[T]he unquantifiable and more genuine learning whose importance within a society cannot be measured by test scores and can only be mourned in its absence." (The Age of American Unreason, pg. 149)

It's truly hard to imagine that people actually believe this absolute data reduction is possible, but it's at the heart of attempts like the CLA to compare "residual" differences in learning outcomes across institutions, and is evinced in comments like this one from "Steve" at InsideHigherEd, which I quoted last time:
20 years from now: Consumer Reports will be assessing the quality of BA degrees, right along side washing machines and flying-mobiles. Parents will ask, "why should we pay 3 times the cost when Consumer Reports says that there is only a 2% increase in quality?!"
Here, a second assumption compounds the first, viz., that college ratings translate easily into worth in dollars. I scratch my head over this sort of thing, which also came out of the Spellings Commission. If what we really care about is dollars, then why not just focus on the salary histories of graduates? The US government already publishes volumes of reports on such things as the average salary of an engineering graduate. Why not add one more dimension, so that the school issuing the diploma can be identified?

One valid reason why the learning = cost equation doesn't work is articulated by an anonymous commenter to today's article on the future of higher ed costs in InsideHigherEd:
Wealthy institutions, such as the small elite liberal arts colleges which charge over $50,000 in comprehensive fees, and private elite universities know that keeping prices high is the surest way to attract the wealthiest customers who will also become future donors. This is always the motivating factor at my institution, where the president is always public about staying among the elite by charging high tuition and by regularly raising tuition above 6%. "We have to remain at the mean of our peers" is the justification.
I remember exactly this kind of conversation with institutional researchers at a round table discussion a couple of years ago. One elite college was raising rates dramatically year after year to "catch up" to the competition. Is it worth it? Is the Harvard experience worth more because of the contacts you will make? You bet it is. How is that going to be measured with a standardized rating system?

This suggests a kind of Red Queen Race among top institutions, fighting for the best students of the top socio-economic strata. I've argued before that many more institutions in addition to the top tier are affected by this treadmill, and those who suffer the most are highly talented, highly motivated students who don't have the right credentials to get admitted into the club, or get admitted but with insufficient aid. That is a real market inefficiency that can be partially addressed with non-cognitive assessments.

A few weeks back I found myself downtown looking for a take-out lunch. I was in a hurry, and the lunch crowd had descended, creating long lines at the sort of place I'd normally eat at. I finally found one place with no lines. It quickly became apparent why that was the case--the cheapest thing on the menu was $17, for some vegetarian "delight" sort of thing. The place was plush, quiet, and refined. Maybe the few patrons were there because of the food, but it seems to me that they were paying for exclusivity as well--no bustling hoi polloi to disturb their cogitations on credit swap derivatives (this was in the banking district). Maybe this is a good place to meet future clients.

Hard assessment sees a college as a factory. Instead, I think the comparison to a restaurant is more appropriate. To see this, imagine applying hard assessment to all of the eating establishments in your locale. The product of this exercise would be a listing of all the eateries with a number denoting the value-added of each. Note that this is not a score from the health department certifying that the kitchen is clean--that's all low complexity stuff. No, our hard assessment must take into account the rich experience of dining, and produce a single number that indicates with scientific precision the performance of the establishment. If you want to take it a step further, you can add the assumption that this metric must be comparable to dollars in some way, so that higher ranked restaurants can charge higher prices.

You can have fun with this analogy: the catalog of programs as a menu, the demographic served as the clientele, institutional aid as coupon-clipped discounts, professors preparing the culinary products, and so forth. No analogy is perfect, but the advantage is that most of us have direct experience with a small number of colleges or universities, but with a large number of restaurants. If anything, it ought to be easier to do hard assessments of restaurants than it is of colleges. (Please note, I'm not talking about "one-to-five stars" type assessments prepared by city guides. They make no pretensions to be scientific.)

In order to build our assessment, we'd have to start worrying about what are the most important outcomes of the dining experience. Is it customer satisfaction? Or rather the health benefits of the food? Or perhaps the ratio of calories to dollars spent? Then we must tackle the problem of how to average across what are really qualitative differences. How do we compare a fish-lover's opinion of the tuna and ginger plate with the customer who just discovered she's allergic to ginger, and had it sent back in favor of a hamburger? How much can we rely on self-reported ratings by customers? Do we take into account the kind of customer who normally eats there, or do we try to randomly sample the population? If the final assessment is to be a single number rating for the restaurant, how do we weight each of these components? (If the answer to that is "we'll use factor analysis," then how do we subjectively decide what the primary dimension actually means?)
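The weighting problem above can be made concrete with a toy sketch. The restaurants, component scores, and weighting schemes below are all hypothetical, invented purely for illustration; the point is only that the same underlying scores, combined under two equally defensible subjective weightings, can reverse the final ranking--so the "single number" is a design decision, not a measurement.

```python
# Toy illustration (purely hypothetical data, not any real assessment):
# the same component scores under two different subjective weightings
# can produce opposite rankings.

# Hypothetical component scores on a 0-10 scale.
scores = {
    "Bistro A": {"satisfaction": 9.0, "health": 4.0, "value": 5.0},
    "Diner B":  {"satisfaction": 6.0, "health": 8.0, "value": 9.0},
}

def composite(components, weights):
    """Weighted average of component scores."""
    total = sum(weights.values())
    return sum(components[k] * w for k, w in weights.items()) / total

# Two subjective weighting schemes -- neither is more "scientific."
foodie_weights    = {"satisfaction": 0.7, "health": 0.1, "value": 0.2}
dietitian_weights = {"satisfaction": 0.2, "health": 0.6, "value": 0.2}

for label, w in [("foodie", foodie_weights), ("dietitian", dietitian_weights)]:
    ranking = sorted(scores, key=lambda r: composite(scores[r], w), reverse=True)
    print(label, "ranking:", ranking)
# The "foodie" weights rank Bistro A first; the "dietitian" weights
# rank Diner B first. Same data, opposite conclusions.
```

Nothing in the data changed between the two printouts; only the weights did. That is exactly the subjective decision a "scientific" single-number rating has to smuggle in somewhere.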

If this seems like a task that is impossible to do while keeping a straight face, it is. We will quickly abandon science and have to make subjective decisions about the design of the grand assessment in order to come up with anything at all. I encourage you to actually try this thought experiment, using the eateries you frequent as your raw material. Remember that the numbers you assign have to be meaningful to other people, not just yourself; solipsistic assessments aren't publishable in Consumer Reports. As a final requirement, assuming your ratings are taken seriously--now you have to figure out how to keep restaurants from "gaming" your rating system to artificially increase their scores. Good luck.
