Intelligence tests are widely assumed to measure maximal intellectual performance, and predictive associations between intelligence quotient (IQ) scores and later-life outcomes are typically interpreted as unbiased estimates of the effect of intellectual ability on academic, professional, and social life outcomes. The current investigation critically examines these assumptions and finds evidence against both. [...] After adjusting for the influence of test motivation, however, the predictive validity of intelligence for life outcomes was significantly diminished, particularly for nonacademic outcomes.

Press coverage of the paper includes Science Daily's "Motivation Plays a Critical Role in Determining IQ Test Scores" and Discover's blog post "IQ scores reflect motivation as well as 'intelligence'."

**We include 'Effort' in our faculty-assessed end-of-semester FACS survey**, and have found a link between grades and this effort rating. Of course, it could just be that professors who think students work hard also tend to give them higher grades, so over the summer we will look at multi-year correlations to rule out that confound.
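The basic check is just a correlation between the per-student effort rating and GPA. A minimal sketch of that computation, using made-up placeholder numbers rather than actual survey data:

```python
from statistics import mean

# Hypothetical placeholder data: effort ratings (0-3) and GPAs for
# eight students. These are NOT real survey values.
effort = [1, 2, 3, 0, 2, 3, 1, 2]
gpa    = [2.1, 2.8, 3.6, 1.7, 3.0, 3.9, 2.4, 3.1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(effort, gpa), 3))
```

With multi-year data, the same computation can be run within professor (does a given professor's effort ratings track grades assigned by *other* professors?), which is what separates a real effort-grades link from a halo effect.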

The graphs below (courtesy of Google charts) show GPA in red and hours completed in blue, above the distribution bars for rated student effort across all classes. The heights of the bars give the percentage of the distribution that received each rating. The left graph is our first survey, from Spring 2009; the right one is from Fall 2010. The sample size has increased as we've gotten better participation.

The drop in credits earned is due to more first year students being included in the sample. The year-by-year story is similar, except that the overall averages have an interesting shape as ecological samples from first year to fourth:

The first-year students in the graph are the first class admitted under the new (much higher) admissions standards. The number shown is the average effort rating on a scale of zero (minimum effort) to three (great effort), for N=1403 in Fall 2010. Note that there is a survivorship bias, so we'd expect the averages to grow as time in school increases. I don't yet have true longitudinal data.

Inter-rater reliability was measured as the frequency of exact matches when two instructors rated the same student. There were 385 such instances, with a match rate of 50.7%. The pure-chance match rate is easy to compute (take the dot product of the rating distribution with itself), but I haven't done that for this sample. In past surveys, the chance of matching randomly has been around 35%. See this source for more on that.

