Wednesday, May 27, 2009

The Dynamics of Interbeing and Monological Imperatives

The title comes from Bill Watterson's brilliant Calvin & Hobbes strip in which Calvin titles his book report "The Dynamics of Interbeing and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes."

The occasion was this Chronicle of Higher Education article, "Community Colleges Need Improved Assessment Tools to Improve Basic-Skills Instruction, Report Says," which I found courtesy of Pat William's Assess This! blog.

Okay, so it's a bit unfair to unload all that post-modern double-speak when the article itself is clearly written. The point is not that the author, Mr. Keller, deliberately obfuscates the matter, but rather that he falls prey to--
The myth of assessment: if we only had better tests, it would be obvious how to improve teaching.
This shows up early in the piece (emphasis added):
To improve the success rates of students who are unprepared for college-level work, community colleges must develop richer forms of student-learning assessment, analyze the data to discover best teaching practices, and get faculty members more involved in the assessment process[.]
Although this isn't lit-crit gobbledygook like "monological imperatives," it's arguably more misleading because of the image of simplicity it conjures up--the idea that good analysis of test results will show us how to teach better. It's actually a lot more complicated than that. The article goes on to be more specific, describing the results of the report "Toward Informative Assessment and a Culture of Evidence" by Lloyd Bond:
[M]easures should be expanded to include more informative assessments such as value-added tests, common exams across course sections, and recordings of students reasoning their way through problem sets[.]
Value-added tests (such as pre-/post-testing) may show where students improved, but not why. For that you'd need controlled experiments across sections using different methods, and even then you only have a correlation, not a cause. Same with common exams. Transcripts of students reasoning through problems could be good fodder for an intelligent discussion about what goes right or wrong, but can't by themselves identify better teaching methods.
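To make that concrete, here's a minimal sketch in Python (the section labels, method names, and scores are all made up, not from the report) of what a value-added comparison across sections actually yields: average gains per teaching method, which is descriptive evidence at best, never proof that the method caused the gains.

  # Hypothetical pre-/post-test scores for a few course sections, grouped
  # by the teaching method used in each section. All numbers are made up.
  sections = [
      {"section": "A", "method": "lecture",    "pre": [55, 60, 48], "post": [62, 66, 55]},
      {"section": "B", "method": "group work", "pre": [52, 58, 61], "post": [64, 70, 69]},
      {"section": "C", "method": "lecture",    "pre": [50, 57, 63], "post": [58, 61, 70]},
  ]

  def mean(xs):
      return sum(xs) / len(xs)

  # "Value added" here is just the average gain (post minus pre) per section.
  gains_by_method = {}
  for s in sections:
      gain = mean(s["post"]) - mean(s["pre"])
      gains_by_method.setdefault(s["method"], []).append(gain)

  for method, gains in gains_by_method.items():
      print(f"{method}: average gain {mean(gains):.1f} points")

  # The output shows which sections gained more and which method that
  # correlates with--but sections differ in students, instructors, and a
  # dozen other ways, so nothing here identifies the method as the cause.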

Ironically, the report itself doesn't take that approach at all (page 2 of the report, my emphasis added):
From the beginning of the project, the Carnegie team stressed the importance of having rich and reliable evidence—evidence of classroom performance, evidence of student understanding of content, evidence of larger trends toward progress to transfer level courses—to inform faculty discussion, innovation, collaboration and experimentation. Because teaching and learning in the classroom has been a central focus of the Carnegie Foundation’s work, our intent was to heighten the sensitivity of individual instructors, departments, and the larger institution generally to how systematically collected information about student learning can help them improve learning and instruction in a rational, incremental, and coherent way.
The tests themselves provide rough guideposts for the learning landscape. It's the intelligent minds reviewing such data that lead to possible improvements (from page 3):
[T]he development, scoring, and discussion of common examinations by a group of faculty is an enormously effective impetus to pedagogical innovation and improvement.
The process described is effective not because the exam showed the way, but because a dialogue among professionals sparks innovation. I've made the point before that when solving very difficult problems, the most robust approach is evolutionary--try something reasonable and see what happens. This report emphasizes that the "see what happens" part does not even rely on perfect data:
To summarize, encouraging a culture of evidence and inquiry does not require a program of tightly controlled, randomized educational experiments. The intent of SPECC was rather to spur the pedagogical and curricular imagination of participating faculty, foster a spirit of experimentation, strengthen capacity to generate and learn from data and evidence[...]
The important part is not the test, but what you do with the results. This is the opposite of the conclusion one would reach from reading the quotes in the article, which immediately devolves into the Myth.

I recommend the primary source as a good investigation, not unduly burdened by the Myth, and full of interesting results. My point is that the general perception of assessment in recent times, from the Department of Higher Education on down, perpetuates the idea that all we need is a magic test to show us the way. In fact, it's far more important to foster dialogue among teachers, administrators, and students. Inculcating a common vocabulary about goals is a good way to start (one of the uses of rubrics). The "better test" myth simply feeds the maw of the standardized-testing companies, which ironically produce the kind of data that is least useful to faculty who want to improve their teaching and curriculum.

We could describe this approach to assessment as:
  1. Be organized (be reasonably scientific, keep good records, don't fool yourself)
  2. Communicate effectively (develop a vocabulary, share ideas among stakeholders)
  3. Do something (try out things that might work and see what happens)
Note that all of this is mainly directed at the "soft" assessment problem of improving teaching and programs. The "hard" problem of how to "measure" education in a global sense can't be solved this way.

Finally, to close the loop on the post-modern theme, it's fun to note the concession to political correctness that accompanies any discussion of student remediation, here from the introduction of the report:
[W]e have used several terms: pre-collegiate, developmental, remedial, and basic skills, recognizing that these are not synonymous and that, for better or worse, each brings its own history and values.
I suggest that we add to the list "differently-learned."
