In our analysis of data from 3,081 students at 19 institutions in the first round of the study we found that, on the whole, students changed very little on the outcomes that we measured over their first year in college.

They provide an overview of findings (pdf) that gives more details. In particular:
[A]lthough students’ improvement on the CAAP Critical Thinking test was statistically significant, the change was so small (less than 1% increase) that it was practically meaningless.

You can find two examples of the sorts of questions that the CAAP employs here (pdf). One asks a question about polling, referring to a hypothetical politician named Favor:
Favor's "unofficial poll" of her constituents at the Johnson County political rally would be more persuasive as evidence for her contentions if the group of people to whom she spoke had:

I. been randomly selected.
II. represented a broad spectrum of the population: young and old, white and non-white, male and female, etc.
III. not included an unusually large number of pharmacists.

The results of the CAAP and other surveys (such as student attitudes and behaviors) were correlated against six "teaching practices and institutional conditions":

- Good Teaching and High-Quality Interactions with Faculty
- Academic Challenge and High Expectations
- Diversity Experiences
- Frequency of Interacting with Faculty and Staff
- Interactions with Peers
- Cooperative Learning

The summary of results shows that the "critical thinking" component was correlated positively and significantly with "good teaching" and "diversity experiences."
I'm not a fan of the status quo in general education, but it seems only fair that if we're going to judge accomplishment using tests, the tests should align with the curriculum. Perhaps at the participating institutions this is the case, but I have trouble seeing where items like those in the critical thinking part of the CAAP actually appear in first-year liberal arts curricula. Certainly, many schools list critical thinking as a goal, but it gets defined in many ways. I don't like the term because of its fuzziness, and this example illustrates that well.
Take the example given about poll sampling. The answer can be arrived at with a bit of common sense, in which case the item resembles an IQ test, or one might have encountered sampling in a Finite Math course or Intro to Psychology. But it's not exactly the level of material that students come to college for. In finite math, they might learn linear programming, a fairly complex analytical tool for solving constraint problems; of course, anyone who hasn't had that material would fail a test on it miserably.
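As an aside, the sampling point behind the CAAP item is easy to demonstrate. Here is a minimal Python simulation, using invented support rates (the 30% and 80% figures are hypothetical, chosen only for illustration), showing why a self-selected rally crowd gives a misleading estimate compared with a random sample of the whole electorate:

```python
import random

random.seed(0)

# Hypothetical numbers: suppose 30% of all constituents support Favor's
# position, but attendees of her own rally support it at 80%.
population_support = 0.30
rally_support = 0.80

def poll(support_rate, n=500):
    """Simulate asking n people; return the fraction who answer 'yes'."""
    return sum(random.random() < support_rate for _ in range(n)) / n

# A convenience sample drawn at the rally (like Favor's "unofficial poll")
# estimates support among rally-goers, not among all constituents.
biased_estimate = poll(rally_support)

# A simple random sample of the electorate estimates the true rate.
random_estimate = poll(population_support)

print(f"rally sample:  {biased_estimate:.2f}")   # clusters near 0.80
print(f"random sample: {random_estimate:.2f}")   # clusters near 0.30
```

The gap between the two estimates is exactly what option I in the test item is getting at: without random selection, the poll measures the crowd, not the constituency.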
But isn't that the point? By trying to "measure" (see Measurement Smeasurement for an explanation of the scare quotes) only the least common denominator of freshman learning--a prerequisite to standardization--aren't we really just applying an IQ-like test? It would be better to use targeted tests that correspond to the curriculum students actually take, rather than imagining that all curricula can be treated uniformly. Put another way, we shouldn't be surprised when students don't perform well on tests that don't correspond to what we've actually taught them.
Other posts on critical thinking are listed here.