All this by way of introduction to my topic: how to judge applicants for admissions purposes? Academic preparation isn't enough if we care about things like motivation and perseverance. I've thought about and written about different angles of this topic for a long time, and now find myself in a position to do something about it.
The conventional wisdom, if there is such a thing, is that grades and SAT are reasonable predictors of achievement. These are called cognitive measures for reasons best known to the psychometricians. As I recall high school, there was a lot more to grades than cognition, but never mind. SAT is a favorite target for those who don't like this narrow thinking, and I'd count myself with the critics. I took the ACT in 1980 in Illinois, and remember liking standardized tests then. I think I got a 28 on the ACT, whatever that means. I also took the ASVAB military battery (in the sense of tests, not artillery), and actually found the results last summer when I was going through stuff in the garage. I remembered taking the test, driving to a National Guard armory in East Saint Louis with my friend Mark, following the lines marked on the floor to the various stations. One was an attitudes and behavior survey, where they discovered that I'd never smoked pot. They made fun of me for that, or else it was some kind of awe. This was 1978, and the stuff was everywhere. One station was the ASVAB, complete with number 2 pencils.
In the Army's eyes, I was being rated for potential by this test, and they took it very seriously. Frankly, I loved taking standardized tests because I always did reasonably well on them, and hence had no stress about the results. What's interesting about the ASVAB is the one domain where I did not do well at all. You can see in the image that there's a 55% bar in the middle. When I pulled this out of the box in the garage last summer I had to squint to read the faded type of the explanation. If you look on the far left, there's a CL designator: that's for "clerical work." The battery decided I'm no good at it. If I recall correctly, this part of the test included such things as counting how many Cs were in a line of Os. Something like this:
OOOOOOOCOOOOOOOOOOCOOOOOOOOCOOOOOOOOOOOOOOOOOOOOOOOOO
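For what it's worth, this is exactly the kind of task a machine handles better than a bored sixteen-year-old. A toy sketch (using the example string above, not an actual ASVAB item):

```python
# Toy version of the clerical speed-and-accuracy task: count the target
# letter among distractors. The string is just the example from the text.
line = "OOOOOOOCOOOOOOOOOOCOOOOOOOOCOOOOOOOOOOOOOOOOOOOOOOOOO"
print(line.count("C"))  # prints 3
```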
That's the sort of thing that drives me crazy. I will tip my hat to the test designers: I am in fact not well suited for mindless clerical work. Did the rest of the battery rate me accurately? I guess that depends on what the expected outcomes were. I wasn't at that time showing much leadership potential, was a bit lazy academically from never having had to work hard in school, and didn't have much ambition to go change the world. Yes, I was only 16 or so, but the test results would still have vastly overestimated what kind of officer I would have made at that point. In short, there were a lot of important things the test didn't measure. This anecdote underlines my main thesis about standardized tests: they are good for simple tasks but not complex ones. And there are many complexities to what makes a successful college graduate. The specifications likely differ from institution to institution as well.
All this by way of introduction to an article I found while researching the link between SAT and race. The article from InsideHigherEd by Scott Jaschik dates from September 2008, and opens with:
[C]ritics of standardized testing — and especially of the SAT — have said that these examinations fail to capture important qualities [...]

What's interesting is that the College Board--creator of the SAT--agrees, and has an active research program for developing new approaches. This is good news and bad.
First, it's good because the old SAT "cognitive test" will lose some of its halo and the marketplace for talent can become more efficient. This is good for all of us. It's also good because it will put standardized testing--the modern-day equivalent of phrenology, in my opinion--under more of a microscope. If policymakers have more sophisticated ways to think about achievement, everyone benefits.
It's also bad, in a very selfish sense, because the institutions that are already acting on this market inefficiency will see their lunch being shared around the table. The article mentions a couple of these: Tufts University and Oregon State University are already using non-traditional approaches to admissions. Of course, I'm not really serious about this--I'm very happy that there's competition for the meme that the SAT is a real achievement score, and I'm confident that a real research program can keep us on the cutting edge of finding the most suitable students for our institution. Ultimately it's good to have competition, and for reasons outlined below, I think each institution will have to find its own solution anyway.
So what are the new methods looking for besides "cognitive" processes? The Group for Research and Assessment of Student Potential (GRASP) has twelve so-called dimensions they consider:
- Knowledge, learning, mastery of general principles
- Continuous learning, intellectual interest and curiosity
- Artistic cultural appreciation and curiosity
- Multicultural tolerance and appreciation
- Leadership
- Interpersonal skills
- Social responsibility, citizenship and involvement
- Physical and psychological health
- Career orientation
- Adaptability and life skills
- Perseverance
- Ethics and integrity
There are already ways to game the SAT. How much more will this be true when the 'right' answers are clearer? That is, it's much easier to appear to have attractive behaviors and attitudes than it is to actually possess them. I can imagine a new preparation industry springing up to coach test-takers who can afford it. Ultimately I think the industrialization of this metric is doomed for this reason. That leaves individual admissions policies to find ways to gather information in ways that are less likely to be faked. The article makes the same point.
The results of the College Board's trials with the new test items are interesting. When academics were de-emphasized in favor of the experimental "biodata" and "situational judgment" measures, enrollment of traditionally under-served minorities increased significantly. Particularly notable is an almost 6% increase in black enrollment at the highly selective institutions that participated in the experiment.
There are problems, like gaming the system, but the payoff is worth it for individual institutions. The article quotes Pamela T. Horne, who has been working with GRASP results. She gives voice to my feelings on the topic:
“This is mission-driven,” she said, noting that colleges don’t define their missions as “enroll students with high SAT scores,” but they do prize leadership, artistic vision and various other qualities that might now be measured.

Unfortunately, if you asked many college presidents and board members, they probably would say that high SAT scores are an institutional priority. Why else would there be all the concern about which scores are reported (first, last, all, average)? Ultimately, more enlightened institutions can take advantage of the limitations of the SAT and other standardized predictors by individualizing their own processes. It's an exciting challenge.
Update: Reading the comments on the article, I found a reference to a textbook on the subject of non-cognitive assessment. Here's the link on Amazon.com: Beyond the Big Test: Noncognitive Assessment in Higher Education (Jossey-Bass Higher and Adult Education Series), by William E. Sedlacek (Feb 26, 2004).