Monday, September 28, 2009

Predicting Success

Nature abhors a vacuum, it's said. My daughter showed me the other day how her science class used this "principle" to determine the amount of oxygen in the air, using a candle and a test tube in water and measuring the water level before and after. Of course, it isn't really abhorrence but air pressure that makes a vacuum a chore to create down here where we live. But maybe economics really does abhor a passed-over opportunity. As in the old joke where one economist says "hey, there's a hundred dollar bill lying there on the ground," to which the other replies "can't be so--someone would have picked it up."

I've argued for a while that the low predictive validity of GPA + SAT creates market opportunities for those willing to experiment with other demonstrations of achievement. Here's a list of previous posts related to the topic:
In Outliers, Malcolm Gladwell makes the following observation about TIMSS, the international math and science test. He notes that the test is accompanied by a 120-question survey, which many students don't complete. In his words:
Now, here's the interesting part. As it turns out, the average number of items answered on that questionnaire varies from country to country. It is possible, in fact, to rank all the participating countries according to how many items their students answer on the questionnaire. Now, what do you think happens if you compare the questionnaire rankings with the math rankings on the TIMSS? They are exactly the same. (pg. 247)
He concludes a page later that "We should be able to predict which countries are best at math simply by looking at which national cultures place the highest emphasis on effort and hard work."

One could certainly ask for better analysis--why not correlate the number of survey items completed against the math score student by student, rather than aggregating by country? But the sentiment is the same one expressed in the noncognitive literature: there's more to success than the ability to do mental gymnastics.
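To make the objection concrete, here's a minimal sketch in Python (simulated data, not the actual TIMSS files, so the numbers are purely illustrative) of why the two analyses can tell different stories: correlations computed on country averages tend to run much higher than the same relationship measured student by student, because averaging washes out individual noise.

    import numpy as np

    rng = np.random.default_rng(0)
    n_countries, n_students = 20, 500

    # Each country has a latent "effort culture"; its students vary around it.
    culture = rng.normal(0, 1, n_countries)
    effort_by_country, math_by_country = [], []
    for c in culture:
        # Survey items answered and math scores: both noisy functions of culture.
        effort_by_country.append(c + rng.normal(0, 2, n_students))
        math_by_country.append(0.5 * c + rng.normal(0, 2, n_students))

    effort_all = np.concatenate(effort_by_country)
    math_all = np.concatenate(math_by_country)

    # Student-level correlation: modest, because individual noise dominates.
    print("by student:", np.corrcoef(effort_all, math_all)[0, 1])

    # Country-level correlation of the means: much stronger--the noise averages out.
    effort_means = np.array([e.mean() for e in effort_by_country])
    math_means = np.array([m.mean() for m in math_by_country])
    print("by country:", np.corrcoef(effort_means, math_means)[0, 1])

So Gladwell's perfectly matched country rankings are entirely consistent with a much weaker effort-score link for any individual student.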

In Inside Higher Ed today there was an article, "Next Stages in Testing Debate," about institutions that de-emphasize the SAT in admissions decisions:
[A] common idea was that decreasing reliance on the SAT does not mean any loss of academic rigor and can in fact lead to the creation of classes that do better academically (and are more diverse).
This may require a rethink, and additional training for admissions staff:
That report said that for too many admissions officers, the only training they receive on the use of testing may come from the technical training provided by testing companies, entities that have a vested interest in the continued use of testing.
Some of the other types of accomplishment sought by admissions officers: evidence of creativity, practical skills, and wisdom about how to promote the common good (Tufts), and essays (George Mason). The idea seems to be blooming. As Mr. Gladwell might say, it's blinking toward an outlying tipping point.

Evidence of this arrived in my in-box the other day: a forwarded email from the Law School Admission Council (LSAC) with the following news:
LSAC has funded research on noncognitive skills for some time. A study funded by LSAC—Identification, Development, and Validation of Predictors for Successful Lawyering by Marjorie Shultz and Sheldon Zedeck—identified 26 noncognitive factors that make for successful lawyering. The study included suggestions for assessments that might measure those factors prior to admission to law school.
I hadn't heard of Shultz and Zedeck, so I scurried off to the tubes that comprise the Internets to find out more. You can find the whole 100-page report and more here. The executive summary has an imposing, lawyerly warning on the front page:
NOT TO BE USED FOR COMMERCIAL PURPOSES NOT TO BE DISTRIBUTED, COPIED, OR QUOTED WITHOUT PERMISSION OF AUTHORS
I suppose this implies an argument that fair use somehow doesn't apply to this work. In any event, since I've obviously already violated the terms with the quote above, I may as well proceed.

Rather than focus on something easy like just predicting law school GPA, the researchers actually assessed job performance and collected LSAT and law school performance data. Then they threw a bunch of tests at the problem: the Hogan Personality Inventory; the Hogan Development Survey; the Motives, Values, Preferences Inventory; the Self-Monitoring Scale; a Situational Judgment Test; and a Biographical Information Inventory. All this on a sample of more than 1,100 subjects. I think the word I'm looking for is "wow."

The study found indicators of effectiveness (per their definition) that appeared to assess independent characteristics, adding dimensionality to the standard predictors. In fact, the LSAT didn't seem to predict success at all. Go look at the executive summary for details--it's not very long.

All in all, this seems to be a solid study showing that noncognitives are important predictors of professional effectiveness--perhaps better than the traditional cognitive ones.

Other noncog stories in the news:
The College Board is even getting into the act. A 2004 publication talks about "individualized review" that includes factors beyond GPA and test scores. I am led to understand that they have a big project underway now on noncogs, but I can't find the website.

Okay, remember the vacuum? What if we managed to identify these noncognitive variables and start to use them? The LSAC folks outline what can happen next:
A major concern about developing an assessment for noncognitive factors is the possibility that the test would be so coachable that its results would be unreliable in the high-stakes environment of law school admissions.
I argued here (Zog's lemma), here, and here that any imperfect predictor invites error inflation for economic gain, but it doesn't take a genius to see that if checking "I'm a hard worker" gets me more financial aid, I'll be more inclined to overestimate my industriousness. This is a game theory problem, and no solution to it is likely to be mass-marketed. Imagine if the assessment of prospective students had to be done laboriously by hand by highly trained admissions staff, instead of relying on a convenient test that cranks out a one-dimensional predictor. I'm not sure that's a bad thing.
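To see why, here's a toy simulation (Python; every number is hypothetical and has nothing to do with any real instrument): a self-report item that predicts performance when answered honestly stops predicting anything once most applicants learn that claiming the maximum pays.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000

    diligence = rng.normal(0, 1, n)                      # true (unobserved) trait
    performance = 0.6 * diligence + rng.normal(0, 1, n)  # what we want to predict

    honest = diligence + rng.normal(0, 0.5, n)           # low-stakes self-report
    print("low stakes:", np.corrcoef(honest, performance)[0, 1])

    # High stakes: suppose 80% of applicants simply claim the scale maximum.
    gamed = rng.random(n) < 0.8
    strategic = np.where(gamed, honest.max(), honest)
    print("high stakes:", np.corrcoef(strategic, performance)[0, 1])

In the low-stakes run the correlation is respectable; in the high-stakes run it collapses, because the gamed responses carry no information about the trait.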

1 comment:

  1. That's an interesting study, and it seems intuitively correct that personality factors influence success more than academic factors. In this particular study, though, no wonder undergraduate GPA and LSAT were such poor predictors--the range was really restricted. The average undergraduate GPA was around 3.7, and the average LSAT was around 165, which I believe is the 90th percentile or so.
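The restriction-of-range point is easy to illustrate with a quick simulation (Python; hypothetical numbers, not from the study): a predictor that works reasonably well in the full applicant pool can look nearly useless when you only observe the top slice that was admitted.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    predictor = rng.normal(0, 1, n)                    # standardized LSAT-like score
    outcome = 0.5 * predictor + rng.normal(0, 1, n)    # later performance

    # Correlation in the full applicant pool.
    print("full pool:", np.corrcoef(predictor, outcome)[0, 1])

    # Keep only the top ~10% on the predictor (roughly a 165-average class).
    admitted = predictor > np.quantile(predictor, 0.90)
    print("admitted only:", np.corrcoef(predictor[admitted], outcome[admitted])[0, 1])

The observed correlation drops by more than half in the admitted group, even though the predictor's true relationship to the outcome hasn't changed.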
