This guest post is by Dr. Barry Stein of Tennessee Tech University. He is co-creator of the CAT instrument I wrote about in "Pizza, Dinosaurs, and Critical Thinking."
David,
I enjoyed your recent blog post on critical thinking. When we were developing the CAT instrument, we knew there was no way we could measure all aspects of what faculty in all disciplines call “critical thinking.” We searched for areas of faculty agreement across disciplines (not an easy task either) and for skills that could be measured reliably, in a relatively short period of time, with mostly short-answer essay questions that reveal something about how students think. This is clearly a subset of all critical thinking skills, but a subset that may be useful for general assessment purposes because it cuts across many disciplines. There is no silver bullet for measuring all critical thinking skills, and we recommend that institutions use multiple measures that can provide converging evidence of student learning.
We also agree that an important feature of the CAT instrument is that it is scored by an institution’s own faculty. Engaging faculty in the scoring process clarifies what skills the instrument measures and allows faculty to directly observe students’ strengths and weaknesses in those areas. We have found that these experiences make it much easier to engage faculty in discussions about ways to better prepare students to think critically than simply learning that their students are at, below, or above some national norm. The CAT instrument also provides a model for how faculty can develop discipline-specific activities and assessments that are relevant to the courses they teach in their own discipline. This is important because how instructors assess student learning greatly affects what students try to learn. If tests are geared toward rote retention, those expectations tend to drive student learning toward memorizing information. We hope that the CAT instrument provides a model of how to move away from assessing and encouraging only the rote retention of information. This does not mean that discipline-specific knowledge is unimportant; as you point out, discipline-specific knowledge is an important part of critical thinking, but students need to learn how to use that knowledge to think critically and solve real-world problems. We have found that faculty can help their students make gains on the CAT by teaching critical thinking skills that are relevant to their discipline.
Although the CAT instrument involves mostly essay-type questions, we do believe that scores assigned at one institution are comparable to those assigned at another. Comparisons across institutions are possible because of the extensive work that went into refining the detailed scoring guide and training materials, the two-day regional training workshops that prepare representatives at each institution to lead scoring workshops on their own campuses in a consistent way, the use of multiple scorers for each question, and the accuracy checks we conduct on a random sample of tests scored at each institution. This last feature is unusual for a test of this type. The evidence we have collected to date indicates that most collaborating institutions are scoring student responses similarly, with less than 5% error when compared to scores assigned by our expert scorers at TTU (see a more recent article on our website, www.CriticalThinkingTest.org).
The CAT instrument is not appropriate for all institutions, but it may be a useful tool for institutions that want to better connect the assessment of critical thinking skills with faculty efforts to improve those important skills across a variety of disciplines.
Barry Stein
Tennessee Tech University
www.CriticalThinkingTest.org