Comments on Higher Ed/: Assessment and Automation

Comment by dave (2010-01-05):

Dear Robert, thanks for the comment. If you browse my other musings on this topic, you'll see that I try to distinguish between assessing learning that is largely algorithmic (deductive reasoning), and CAN be assessed pretty easily with some confidence, versus learning that is more complex, like creative/inductive thinking, which is not so amenable to standardized-type tests.

There is no doubt that there is defensiveness about what goes on in the classroom, and there is definitely a need for more transparency and uniformity. I'm not defending the status quo. On the other hand, I don't think the solution is as easy as conjuring a desired measurement tool regardless of whether one actually exists. Brain surgery is relatively easy to assess--if the patient survives and regains consciousness afterwards, that's a big plus. By comparison, a lit-crit analysis isn't so cut and dried, because there is no 'right' answer. The assessments will therefore necessarily be subjective judgments, case by case. Here the assessment isn't of the right/wrong variety, as with deductive problems, but rather 'your thinking conforms to current consensus'. Genuinely new knowledge has to break out of this (at least per Kuhn). Two years ago, a test of the question "can students analyze the mid-term financial risks in a P&L and suggest a course of remedial action" would (I hope) look different from the same one today--the 'correct' answers would probably not rely heavily on credit swaps, I imagine.
The bottom line is the connection to the real world: unless learning outcomes assessments can actually make and support that connection, they have no claim to be 'measurements', and we should rightly be dubious of their claims. Who was right: early Wittgenstein or later Wittgenstein? Is that question assessable within the realm of science?

I've looked at my post and don't see anything that appears to be ad hominem, but I apologize if anything reads that way. It certainly wasn't my intention.

Comment by Robert W Tucker (2010-01-04):

David,
As far as I can see, we share perspectives on the importance of balancing the Janus-headed ideals of precision and realism at the conceptual and methodological levels of assessment. We also share the view that it can be difficult (or more) to operationalize and measure some phenomena that we can nonetheless reliably apprehend at the subjective level (although my experiments show the reliability to be less than we believe, and often unacceptably low).

We appear to take separate roads here: you seem to impute a level of inscrutability to at least some of what is meant by higher education's outcomes. I believe (as was overheard in a behind-the-counter employee training session at McDonald's), "This stuff isn't rocket surgery." I am less metaphysical than you appear to be when it comes to framing the challenges of measuring higher education's outcomes. For the most part, what I see out there in the classroom is a lack of creativity fueled by defensiveness, arrogance, and a corresponding lack of skills in measurement science. Even some of your comments related to my post seem ad hominem. I see no need for this.

As members of the professoriate (in my case, former), should we not be honest in acknowledging that most of our colleagues assess with a series of essays in which the T/R coefficient is considered stellar if it reaches 0.65, or with multiple-choice items, 45% of which do not pass elementary validity standards, are administered too infrequently, under too few contexts, and are largely inauthentic to the generalizations we would hope our students derive? What exactly are you defending here?

Of course there are philosophical issues beneath the surface; some of them are unresolved, and discussing them is a source of great pleasure to me.
This does not change the fact that the majority of the "challenges" we face in becoming more accountable are pedestrian at the scientific level and material only at the level of psychological defensiveness. Don't we want to know measurable things such as: can students analyze the mid-term financial risks in a P&L and suggest a course of remedial action; can they identify the psychological defense mechanisms illustrated in Shakespeare's works and apply them to their case load; can they explain the differences between the early and later Wittgenstein and link each proposition directly to a theory of meaning, showing its influence on modern measurement constructs? And so on . . .

I'm sorry, but as I learned at McDonald's, it ain't rocket surgery.

Robert W Tucker
President
InterEd, Inc.
http://www.intered.com

P.S. You began this post with a quote from me (I must have been sore-headed that day), but your response seems to be exclusively directed at what a "Steve" said in response to me.