Assessment of learning outcomes and the development of expertise go hand in hand. A while back I saw a nice conceptualization of these working together. It's a Learning Outcome Network Interview Tool from David Dirlam at Hebrew Union College, which he shared with the ASSESS listserv. I've reformatted a description from the linked document:
The interviews will be seeking to discover several dimensions of four types of commitments that learners make on their way to becoming experts. The four commitments are:

- Beginning: to try
- Easy: to learn a little
- Practical: to become proficient enough to earn a living in the field
- Inspiring: to make a contribution or unique discovery within a field

Each commitment is realized within a different time frame. It takes no time to begin, a few months to comfortably use an easy strategy, a few years to get good at practical strategies, and a decade to make regular contributions to a field.

David and co-author Scott Singeisen wrote an article, "Collaboratively Crafting a Unique Architecture Education through MODEL Assessment" [1], that has background and much more detail. They have done some very interesting and creative work in framing a learner's trajectory from beginner to expert. The work is based on a large corpus of empirical data that they have analyzed to identify what they call a Succession Model. This is depicted graphically in the handout posted to the listserv, and I've reproduced it below for your convenience.
The ratings of proficiency are tied to a rubric, but it's not the kind of rubric one typically sees, because of its scope. The text on the graph above is fuzzy, but the highest level of achievement has a bubble noting that its actual frequency is "near zero." What's really attractive about this idea from a pedagogical perspective is that it puts the whole landscape of the educational endeavor into one frame. In the paper, a rubric for architecture is given, ranging from the stereotypical misconceptions of beginners through the transformative integration of components that is the mark of a brilliant architect.
In the conclusion of the paper, there is a powerful statement that can be used in conversations with faculty about assessment. It lists among the outcomes of the research a comprehensive and original theory of development within the discipline, and it makes the case that the participation of a group of faculty "dilutes the biases of individuals" to give a good collective result. My guess is that after all this, the faculty would feel pride of ownership in the end result.
Thinking back to my teaching and assessment in math, I think I have approached the idea of assessing creativity tangentially, but never as directly as David and Scott do. For example, to illustrate the role of creativity in mathematics, I typically show students the NOVA film "The Proof," an excerpt of which is on YouTube.
But while such an illustration gives math students a glimpse into the life of a brilliant professional, that's as far as it goes. And yet, would it not be a boon to students, even at the undergraduate level, to map out how deeply one can go into their chosen subject, with descriptions of what real expertise entails? I think so. In our current QEP (a SACS project for improving learning), two of the goals are a nice combination of noncognitives: (1) self-assessment and (2) planning for the future. This is a prescription for just such a long-range view of a discipline.
I also think the model Dirlam and Singeisen have created nicely complements the rubric strategy I've used before, which ties outcomes to the degree program by assessing students relative to an ideal Fresh/Soph, Jr/Sr, or ready-to-graduate student. I won't rehash that, as I've written about it before.
Assessing Creativity is what I set out to write about in this post. The example above is very interesting, and I'm looking forward to learning more about it. Obviously a discipline like architecture requires both analytical and creative skills, but the "long view" of expertise works for both.
You probably know about the Flynn Effect, whereby IQ scores have steadily risen over the years. This has led to unintended consequences, but what I didn't know is that there is (or was) a similar effect in creativity. Check your skepticism at the door for a moment, and play along. The article is in Newsweek, and so as with any popular media story, you can expect an eschatological twist. To wit, the article is entitled "The Creativity Crisis," and dates from July 2010. It describes results of a standardized assessment of creativity (CQ):
Like intelligence tests, Torrance’s test—a 90-minute series of discrete tasks, administered by a psychologist—has been taken by millions worldwide in 50 languages. Yet there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.

This is a serious study with a lot of data behind it:
Kyung Hee Kim at the College of William & Mary discovered this in May, after analyzing almost 300,000 Torrance scores of children and adults. Kim found creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward. “It’s very clear, and the decrease is very significant,” Kim says. It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”

Here's the obligatory "end of the world is nigh" quote from Newsweek:
The potential consequences are sweeping. The necessity of human ingenuity is undisputed. A recent IBM poll of 1,500 CEOs identified creativity as the No. 1 “leadership competency” of the future.
The article goes on to argue that the neurobiology of creativity is at least partially understood, and that creativity can be learned. I think the first thing to do is start pointing it out where it occurs. I try to do this in math classes, because students have the most trouble with problems that require creative (as opposed to deterministic) solutions.
The article is interesting, provocative even, but I think it misses one thing. To be really productive as a creative person means being productive in a group. The dynamics of sitting alone and composing a guitar piece are very different from knowing how to present a creative idea to a group of colleagues, or to recognize and support creative solutions from others. It seems to me that productive group creativity is tightly linked to emotional intelligence. In fact, it's very odd that we educate students in silos: homework and testing are almost always expected to be done on one's own. Then students get turned loose in a lab or corporate office and have to work as a team. Here's a crazy idea: why not encourage students to assess and monitor their own intellectual and social abilities, and help them teach themselves how best to "plug in" to a working group? To stereotype a bit, there are the good group leaders (organized, respectful but firm, goal-oriented), the idea people (smart, creative, random, delicate), the analytical whizzes (love of technical detail, logical, great deductive thinkers, visual, proud), and so on.
At the very least, it would be interesting to survey perceptions about these skills and attitudes, as well as the perceived overall effectiveness of the group, to see how creative (and analytical, etc.) people affect the whole. My guess is that most individuals don't get to perform at their best because the group dynamics aren't conducive to it.
Closing note: My second Calculus II exam is take-home, and I encourage the students to work together on the problems. They just have to tell me who they worked with. This technique has worked well for me before with small upper level classes. It encourages all kinds of good behavior, which (for me) trumps the minor drawback of not knowing who knows what, exactly. I'll find that out on the final exam. Anyway, students tend to pair themselves off by ability level, so there's not nearly as much copy/paste as you might think.
[1] Dirlam, D. K. and Singeisen, S. R. (2009). Collaboratively Crafting a Unique Architecture Education through MODEL Assessment. In P. Crisman and M. Gillem (Eds.), The Value of Design (pp. 445-455). Washington, DC: ACSA Publishing.