Last time I mentioned that I had discovered Dr. Ed Nuhfer's work on knowledge self-assessments. I wrote Dr. Nuhfer to ask a couple of questions about the results, and he was kind enough to respond and give me permission to post his comments here. His remarks are fascinating.
In order to understand the context of my questions (in blue below), you may want to take a peek at the article I cited yesterday, "The Knowledge Survey: A Tool for All Reasons" by Ed Nuhfer, and examine Figure 2 in that paper. Don't be confused by the fact that the same graph is referenced as Figure A below. The graph shows self-assessed learning differences (pre- and post-) sorted by Bloom's level of the item in question.
Hi David—let me see if I can help you here. You stated: "Wouldn’t one expect both graphs to be lower at the right side of the graph? That is, shouldn’t more complex tasks seem more challenging, and therefore inspire less confidence? Or are the data normalized in such a way as to hide this kind of thing?"
"A related question is, if you sort the results by difficulty (either pre or post estimate), is there some trend in the type of question that is more difficult?"
Good thinking. Let's start with the second question first. The patterns in pre- are largely governed by the students' backgrounds in relation to the content the question asks for. It's not difficulty so much as language. Remember these are averages—so on the average, students in a school have had some preparation in some areas and none in others. We find amazing consistency between different sections of the same course.
The post- is pretty consistent for the same instructor across multiple course sections. Usually, there are differences we can see between different instructors teaching the same courses. If students are REALLY learning something, minds should be changed—some valleys in the pres should be peaks in the posts. Often, this is not the case. If students come in with knowledge and we just teach to those same areas, we can end up with pre-post correlations of about 0.8. That's NOT what we should want.
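(An aside from me, not part of Ed's note: to make that pre-post correlation concrete, here is a minimal sketch with made-up item averages showing the kind of item-level correlation he's describing. The 1-3 confidence scale and the numbers are my assumptions, not his data.)

```python
# Hypothetical item-level knowledge survey averages (1-3 confidence scale,
# averaged over students). A correlation near 0.8 would mean the post-course
# profile mostly mirrors the pre-course profile, i.e., the course largely
# reinforced what students already knew coming in.
import numpy as np

pre  = np.array([1.2, 1.5, 2.4, 1.1, 2.0, 1.3, 2.6, 1.8])  # made-up data
post = np.array([2.1, 2.3, 2.9, 1.9, 2.7, 2.0, 2.9, 2.5])  # made-up data

r = np.corrcoef(pre, post)[0, 1]
print(f"item-level pre-post correlation: {r:.2f}")
```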
On language, "higher level" does not necessarily equate to "more difficult." Naming the capital of your state is recall, and it's easy; naming the capitals of all the states is still recall, but it is more difficult. Learning the names of twelve people in our class is fairly easy; learning the names of all the people in the Chicago phone book is nearly impossible—and it's still just a low-level task.
Next, let's see if I can answer that first question, starting with the Figure you sent me. That distribution of Bloom levels is from the very first knowledge survey I ever did—probably around 1992-1993.
Call this Figure A.
Next, let's look at the simple pre-post results in order of the items given in that same class. We now have two ways of looking at this same data.
Call this Figure B – the items are now in the order of course presentation.
Note the general drop in reported achievement (Figure B) after approximately item 160. That occurred because of poor course pacing—too much material run through in about the last two weeks.
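(Another aside from me: here is a minimal sketch of a Figure B-style plot, pre/post item averages in order of course presentation. The data are simulated, not Dr. Nuhfer's; I've simply built in a sag after item 160 to mimic the pattern he describes.)

```python
# Simulated Figure B-style plot: item averages in order of presentation,
# with a post-course drop after item ~160 to illustrate rushed pacing.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
items = np.arange(1, 201)
pre = 1.4 + 0.3 * rng.random(items.size)                    # little prior knowledge
post = np.where(items <= 160, 2.7, 2.0) + 0.2 * rng.random(items.size)

plt.plot(items, pre, label="pre", color="blue")
plt.plot(items, post, label="post", color="red")
plt.xlabel("Item (order of presentation)")
plt.ylabel("Mean self-assessed knowledge (1-3 scale)")
plt.legend()
plt.title("Simulated Figure B-style plot")
plt.show()
```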
What Figure A helps to answer is: of the material lost by galloping through it in the final weeks, what was the nature of this loss? Figure A shows that this loss was mostly low-level information. So now, in answer to your question, the higher Bloom challenges had already been done earlier, and there was plenty of time devoted to these. Students who scored badly on first drafts had the opportunity to revisit the assignments and revise.
The greatest value of a figure like Figure A lies in course planning. BEFORE we inflict the course on students, we can know the level of challenge we are going to deliver. This example was for a sophomore course and was right on the money for meeting most of those students' needs—not a lot of prior familiarity with this material (lots of blue in the figures is not good), but they had a good understanding of most of the material by the end (lots of red is good). We hit Vygotsky's zone of proximal development pretty well for most of these students as a result.
The open-ended challenges used for high Bloom levels involved conceptual understanding of science and evaluation of hazards posed to oneself by asbestos and radon. Because these were considered the most important learning outcomes, those are what we focused on most and early, in order to be sure we met them.
There are always more facts that students could know, but beyond what was needed to meet the objectives of this particular course, we didn't worry much about what was lost in the last weeks (lots of gap above the red is not good), because we had already met the planned learning outcomes very well.
This particular course was redesigned as the result of this assessment by cutting out most of the less essential material altogether. Better to "teach less better"; it really doesn't serve anyone well to present even low-level information through a "drive-by" when students aren't able to learn it well.
One thing I want to add, which we stressed in the paper Delores Knipp and I wrote, is that Figure A in itself cannot demonstrate that critical thinking occurred. Just because we ask high-level Bloom questions doesn't mean that students respond with high-level answers. A reviewer needs to know the answers to something like the following: "OK, I can see the high-level challenges, and I can see that students now register high confidence to meet these. But just WHAT DID THE STUDENTS DO to demonstrate that knowledge?"
To answer that, we need to show the actual assignments/projects and the rubrics used to evaluate the students' responses. If we have the assignments, the rubrics, and the knowledge survey results, we can then clearly see how well students really met high-level challenges with high-level responses.