There is also a report on demographic breakdowns called Differential Validity and Prediction of the SAT, which shows differences across race and gender groups. On the whole, predictive validity doesn't vary much between groups, especially considering that what we usually care about is R-squared rather than the correlation itself (a correlation of 0.5, for example, explains only 25% of the variance).
Since I've been interested in using noncognitive indicators to predict achievement, I looked for some statistics to guide me. All I've found so far is the paper "Predicting the Academic Achievement of Female Students Using the SAT and Noncognitive Variables" by Julie R. Ancis and William E. Sedlacek. In the study described in the paper, two of the noncognitive dimensions seemed (on my reading) to be the best predictors: realistic self-appraisal and community service. The paper doesn't report how much these add once GPA and SAT are already taken into account, but putting the published statistics together, one can construct an approximate "best case" scenario. This is shown in the chart below.
The contribution of the noncognitive variables was, to me, disappointingly small, and it is likely even smaller in practice because of their correlation with SAT and GPA, which is unknown here. That is, the green slice might actually overlap with the red or blue ones.
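This overlap problem can be illustrated numerically. Below is a hypothetical simulation (the weights and correlations are made up for illustration, not taken from the Ancis and Sedlacek study) showing that the incremental R-squared contributed by a noncognitive score shrinks as its correlation with SAT grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def r_squared(X, y):
    # R-squared from an ordinary least-squares fit with an intercept term.
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

added = {}  # incremental R-squared contributed by the noncognitive score
for r_sat_nc in (0.0, 0.5):  # assumed correlation between SAT and noncognitive score
    cov = [[1.0, r_sat_nc], [r_sat_nc, 1.0]]
    sat, nc = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    # Assumed weights, chosen only so that SAT is the stronger predictor.
    fygpa = 0.4 * sat + 0.2 * nc + rng.normal(size=n)
    r2_sat = r_squared(sat[:, None], fygpa)
    r2_both = r_squared(np.column_stack([sat, nc]), fygpa)
    added[r_sat_nc] = r2_both - r2_sat
    print(f"corr(SAT, NC) = {r_sat_nc}: SAT alone R2 = {r2_sat:.3f}, "
          f"added by NC = {added[r_sat_nc]:.3f}")
```

When the two predictors are uncorrelated, the noncognitive score's full contribution shows up as added R-squared; at a correlation of 0.5, part of its explanatory power is already absorbed by SAT, so the increment is smaller, which is exactly why the green slice in the chart is a "best case."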
It's interesting to note that "real-life" correlations between SAT and FYGPA, like the one in the Ancis and Sedlacek study, are lower than those in the College Board's research reports. This has been my experience too, and I can't explain the difference. I came across a rather impassioned argument for dropping the SAT as a predictor in a 1992 proposal from Jonathan Baron at the University of Pennsylvania. His correlations between SAT and FYGPA are even lower than mine, and he argues that there's not enough value added by the SAT (after taking into account other predictive variables the university uses) to justify its continued use. He makes an interesting point about the emphasis on the SAT in the admission process:
College admissions criteria have major effects on high-school education. College-bound high-school students do what they think will help them get into a good college. Students now spend a considerable amount of time preparing for the SAT. (When my son was taught how to take multiple-choice exams in Kindergarten, when he was in a group of children who could already read, I complained that this was an inappropriate activity, and I was told that it's never too early to start preparing for the SAT!) If they were told that the SAT was not important but their grades and their achievement test scores WERE important, they might spend more time trying to learn something and less time trying to learn how to appear to be intelligent on a test. This might be reason enough to drop the SAT, even if it were somewhat useful for prediction.

The trend toward standardized testing as a measure of minds is not limited to the SAT. Now we have the whole No Child Left Behind apparatus and an increasing appetite at the Department of Education to infect higher education with this philosophy using the likes of the CLA. I hope that instruments using noncognitive assessment can get more attention and be developed into something useful.