
Thursday, October 29, 2009

Critical Thinking Buzz

Yesterday I promised to revisit the different points of view about critical thinking evidenced at the Assessment Institute. This is a big topic, and I've babbled about it before (here). It's important because so many institutions plug it in as a learning objective. It's especially dear and near to the circulatory organs of liberal arts schools.

First, I have some bad news. Just look at the graph of search hits from Google Trends to see the sad tale.
The story is clear: searchers are becoming less interested in thinking and more interested in ignorance, with the latter spiking in about June 2009. In case you think student engagement might be riding to the rescue, the sad little red line at the bottom says otherwise. Interestingly, searchers seem to lose interest in either aspect of cognition around the holidays. After much research, I discovered the reason behind this, shown below.

Okay, maybe that's enough silliness :-)

Here's what I heard about the subject at the conference--three things. First, as I mentioned yesterday, the opening plenary revealed a divide among the panelists when the topic came up as part of a wider conversation. I was writing furiously and am not sure whom to attribute this statement to:
There is no evidence that there are generalizable skills like critical thinking. You have to master one domain first.
This echoes my own thoughts and classroom experiences on the matter, which I've described previously. In a nutshell: developing significant ability to do deductive reasoning is a prerequisite to doing interesting inductive reasoning. If this is true, major programs are more likely to cultivate complex thinking skills than a broad curriculum like general education.

Other panelists disagreed, but there wasn't time to have a proper debate on the issue.

The second encounter was in a session about integrated assessment, where gen ed-type skills like communicating and thinking are threaded throughout the curriculum. Critical thinking was explicitly one of those. During Q&A I offered my own (heretical, I suppose) thoughts on the topic and was rather sternly admonished that the problem had been solved by psychologists and all I had to do was look in the literature. Heck, that may be true, and I've started looking. I'm a math guy after all, not a psychometrician. But if there was overwhelming theory and empirical evidence for a particular model, why is there still debate? Is it like Darwinian evolution, where some simply reject it because of dogma? I doubt it, but I'll try to find out. In the meantime, color me dubious. I asked the session speaker for some references yesterday by email.

I did find an accessible article by Tom Angelo, who was on the plenary panel, called "Beginning the Dialogue: Thoughts on Promoting Critical Thinking." It was published in 1995, and it opens by saying of critical thinking that "Despite years of debate, no single definition is widely accepted." This was actually confirmed in the session I mentioned, where it was taken for granted that critical thinking in an English course is different from critical thinking in a math course. This by itself isn't fatal to the idea; after all, writing is different too, but we can try to teach writing across the curriculum. But the subtleties are important.

For one thing, on a very basic level, I can watch Tatiana write a paragraph and say with confidence "this student did some writing." Evidence of thinking is different. I can look at that same paper and try to imagine what went on inside her head when she wrote it, but I can't really know that she was thinking at all. Maybe she put random words on the paper, or quoted something she had memorized. There's an empiricism gap. I can count words written. Quantifying (even with a binary yes/no) critical thinking is not so straightforward.

To push that a bit further, imagine taking a piece of work from an English class and sticking it into a stack of math papers being graded. The math prof squints at it and wonders what the heck this is. Can the prof then pronounce whether or not critical thinking has taken place in the English class by inspection? Or is he/she only competent to judge in the domain of math? Reverse the situation. An English instructor sees a complex page of handwritten formulas and text, purporting to settle the Continuum Hypothesis once and for all. If you don't have technical expertise in an area, it's virtually impossible to judge what level of thinking has occurred.

But maybe that's not what we mean. Here's a definition Dr. Angelo likes in the article, quoting Kurfiss:
[Critical thinking is] an investigation whose purpose is to explore a situation, phenomenon, question, or problem to arrive at a hypothesis or conclusion about it that integrates all available information and that can therefore be convincingly justified. In critical thinking, all assumptions are open to question, divergent views are aggressively sought, and the inquiry is not biased in favor of a particular outcome.
Allow me to point out a couple of things here. First, the creation of a hypothesis is (if true) the creation of new knowledge. And because we want it to conform to facts and hold up to new evidence that arrives, this is very similar to inductive reasoning. It sounds like the scientific method. But in order to do any kind of inductive reasoning, you have to have some knowledge of the deductive processes active in that domain. You can't write a math proof without knowing propositional logic; you can't solve a problem with the rudder on your 747 unless you know how the thing works; you can't create complex financial leveraging instruments unless you understand the risks. Well, maybe that last one was a bad example.

One curious aspect of critical thinking assessment is that although the language surrounding it mentions all kinds of desirable habits of mind, higher ed kind of dodges the issue. Maybe there's some institution out there that really tackles teaching of self-assessment, open-mindedness, thoroughness, focus, and so forth. I'd love to know. An outcome like what a student wrote on a piece of paper is far downstream from the actual events that led to its creation. There are some surveys that try to assess habits of mind--the CIRP for one. But who's trying to teach them outside of an orientation class? There is therefore a curious disconnect between our actual desired outcomes and what we teach.

I think it's healthy to ask another level of why. Why do we want students to think critically? We get stuck in our own curricular bubbles, perhaps. Let's step outside for a moment. What is gained if critical thinking--however you define it--is employed?

Is it because we want active citizens? Or we want good problem-solvers? Or we want entrepreneurs? Whatever the answer is, it's likely to be more easily pinned down than the amorphous notion of critical thinking. We can actually look at evidence of citizenship or problem-solving. We can create programs and curricula to address them. And don't forget the habits of mind thing--for my money, that's an untapped vein of gold, and also amenable to assessment. At the right point in the report cycle, the assessment director can still aggregate the heck out of all the types of "critical thinking" on record and serve up a glop of statistical goo to the admins. Just put the graphs in color--that means more than any data you put on there. And be sure you put the right logo on the thing.

The third encounter with critical thinking I had was indirect; I heard about it through a conversation. What I was told was that in a Tuesday session, U. Phoenix served up "Assessment Methods: Creating a Critical Thinking Scoring Instrument as a Tool for Programmatic Assessment." I can't find the slides online, but I'll try to get them and ask the presenters what happened. Apparently there was essentially heckling of the presenter(s) about the definition of critical thinking, and my companion's assumption was that this was related to the fact that it was U. Phoenix presenting and not, say, Alverno College. If so, this is a sad irony. If a group of professionals gather to talk about critical thinking and don't actually demonstrate the ability to do it, where are we? I will suggest to the organizers that they replace iced tea refreshments with Scotch next time.

Wednesday, October 28, 2009

The 2009 Assessment Institute

I'm just back from two full days in Indianapolis at the Assessment Institute. The trip was great. I made contacts and learned some really interesting stuff. Too much to summarize this morning, but here are some bits and pieces to be filled in later.

The opening plenary was shortened by an unfortunate medical incident, but hinted at some interesting fault lines. Trudy Banta, the organizer, does a good job of representing different points of view with her choice of panel, guest, and topic.

The first divergence I noticed centered on the idea of "tuning," a concept borrowed from the Bologna Club in the EU, which I blogged about here and here (the update at the bottom). It's not a lunch meeting, as you might think, but a process of comparing and improving programs across institutions. As a point of trivia, we learned from Jeffery Sybert that concert A is 440 Hz in this country, but 442 Hz in Europe. I'm quite sure I couldn't tell the difference, which is in any case greater than the intra-tuning dissonances of an equi-tempered scale. There must be an interesting story there. The topic was presented by Jamie Merisotis, President/CEO of Lumina Foundation for Education. You can read more about the project here. The panel dissonance in this case was fairly minor, turning on the question of how much authority faculty should have in the process. On the one hand, faculty own curriculum. On the other, they can be protective, self-interested, and unduly academic in their horizons (speaking as one). This is a good case of needing external reviewers to constantly check that goals and progress align. (Such as a stakeholder analysis: see below.)
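
For the musically curious, here's the arithmetic behind that trivia--a quick sketch in Python (the 2 Hz figure comes from the session; the cents formula and the 3:2 just fifth are standard, and the comparison is only illustrative):

```python
import math

def cents(f1, f2):
    """Interval between two frequencies, in cents (1200 cents per octave)."""
    return 1200 * math.log2(f2 / f1)

# Gap between American (440 Hz) and European (442 Hz) concert pitch
print(f"A440 vs A442: {cents(440, 442):.2f} cents")   # ~7.85 cents

# For comparison: how far an equal-tempered fifth (exactly 700 cents)
# falls from a just fifth (frequency ratio 3:2)
just_fifth = cents(1, 1.5)                            # ~701.96 cents
print(f"Tempering error of a fifth: {just_fifth - 700:.2f} cents")  # ~1.96 cents
```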

There was the admonition from the panel that for-profits and their venture capitalist underwriters are very interested in producing a meaningful educational product, and by implication that the usual plodding change of bricks and mortarboard institutions won't be fast enough to compete with cyberdon. As a riff on that, George Kuh suggested that "the credit-hour is a dying concept in higher education, measuring things we no longer value." He elaborated, equating seat time with credit hours under the traditional system.

The other twanging note in the plenary was caused by the use of the term "critical thinking." The consensus position is that this skill is valuable, should be taught and "measured," etc. In opposition to that is the idea that maybe critical thinking is too fuzzy to actually be useful as a learning objective. This turned out to be a minor theme at the conference, which I'll elaborate on tomorrow. In a Q&A at one session, I was given an admonishing mini-lecture that psychology had solved the critical thinking problem and that I only needed to look at the literature. In a session with University of Phoenix that I missed, I'm told that there was rude and contentious debate about assessing critical thinking. Maybe someone who was there can comment.

I met the creator of Waypoint, which I had blogged about here. I watched a demo of the online rubric management and implementation software. More on that later.

At lunch on Monday, Jon and I chatted with an Assistant Director of CIRP at HERI [edit: fixed title]. I've used the freshman survey and found it quite useful for spotting attrition trends (see this post), and I was interested to learn that CIRP is getting into the constructs business, using item response theory. I went to the session on that on Tuesday, and have some other comments that will have to wait until I have more time.
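
For anyone who hasn't bumped into item response theory, here's a minimal sketch of the two-parameter logistic model that construct-building typically rests on (the parameter values are invented for illustration; I have no idea what CIRP's actual models look like):

```python
import math

def p_endorse(theta, a, b):
    """Two-parameter logistic IRT model: probability that a respondent with
    latent trait theta endorses an item with discrimination a and difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# A hypothetical survey item that discriminates well (a=1.5) and is
# "hard" to endorse (b=1.0): respondents low on the construct rarely agree.
for theta in (-2, 0, 2):
    print(theta, round(p_endorse(theta, a=1.5, b=1.0), 2))
```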

A session by Tom Zane at Western Governors University gave a fascinating insight into their system of assessing massive amounts of student work. You can read about their innovative system of using assessments to entirely replace grades here. They get between 1500 and 2000 new students each month, and are still growing rapidly. Student work is assessed by human raters using rubrics--this isn't standardized test land. Samples are rated more than once to test for reliability. Although this is still monological, my first impression is that this is as good as it gets for traditional assessment, and that the assessment side of their business could plug into this model if WGU chose to do it--providing a uniform system of credentialing for higher ed. Something like that is conceivably in our future. That might sound scary, but it's infinitely better than standardized tests running our lives.
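
Zane didn't say which statistic backs the double-rating check, but to give a flavor of how such a reliability check can work, here's a minimal sketch using Cohen's kappa on some invented scores (not WGU data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters on the same samples, corrected for chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2
    return (observed - expected) / (1 - expected)

# Two raters scoring the same ten work samples on a 3-level rubric
r1 = [2, 3, 3, 1, 2, 2, 3, 1, 2, 3]
r2 = [2, 3, 2, 1, 2, 3, 3, 1, 2, 3]
print(round(cohens_kappa(r1, r2), 2))   # ~0.69 -- substantial agreement
```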

AAC&U has a project called VALUE, which accumulates rubrics that they have found or created and refined. This seems useful, and complementary to the tuning idea. The next part of the project is to create a repository of student work that has been rated using the rubrics. A problem I noticed more than once is that designers usually don't seem to think much up front about whether their rubric is relative to the curriculum or fixed absolutely. When we built the FACS model, we used an absolute scale, in which raters say things like "student Ecks is working at the freshman/sophomore level." In a relative scale you get stuff like "exceeds expectations." The former is great for tracking longitudinal progress, the latter not so much. A good student will exceed expectations in all classes, showing no progress. By contrast, even a great freshman math major is very unlikely to be doing senior-level work. I'm not sure the VALUE leaders have addressed this.
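
To make the absolute-versus-relative distinction concrete, here's a toy sketch (invented ratings--not FACS or VALUE data) of the same strong student tracked over four years on each kind of scale:

```python
# Toy illustration: one good student rated once a year on an absolute
# scale versus a relative ("meets/exceeds expectations") scale.
absolute = {2006: "freshman level", 2007: "sophomore level",
            2008: "junior level",   2009: "senior level"}
relative = {2006: "exceeds expectations", 2007: "exceeds expectations",
            2008: "exceeds expectations", 2009: "exceeds expectations"}

# The absolute ratings show four years of growth; the relative ratings,
# anchored to each course's own expectations, stay flat and show none.
for year in absolute:
    print(year, absolute[year], "|", relative[year])
```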

Jon Shannon and I led a 75-minute session on stakeholder analysis in strategic planning, which had good participation. I blogged about the topic here, and you can find the presentation linked here. This is a great tool for addressing complex planning issues. One of the advantages is that it keeps the conversation on track, focused on goals everyone more or less agrees on.

The backchannel on Twitter was pretty thin, or else I just didn't hit the main vein with my search. We set one up at Today's Meet and advertised it on one of our slides, but didn't get any activity from participants.

More on some of these topics later.

Friday, October 16, 2009

Prezi for Presentations

Now that PowerPoint presentations have reached their peak in creative expression with the brilliant summit of art embedded below, it's time to move on to other media. In the era of Web Duh Punt Null, new media arrive shiny and gleaming in your in-box about every 10 seconds anyway.


The new kid on the block is Prezi, a Flash-based composition and presentation package that you can use for free, or upgrade to paid packages for more features. In practice, you'd want to upgrade because otherwise everything you create is instantly public. Either way, you can use an online navigation page or download your stuff along with a desktop presenter to put on a USB drive. If you want to be able to compose offline, however, you have to buy the expensive package. I tried Prezi out last weekend in preparation for a workshop I'm co-presenting at the 2009 Assessment Institute later this month in Indianapolis. It's a great conference for both the presentations and making contacts. It's a very different feel from the SACS meetings (regional accreditor), where you can judge how close someone is to the visit by how much they sweat and the degree to which they jump at sudden movements. By comparison, the Assessment Institute is relaxed and intellectual.

It's easier to show you how Prezi works than to tell you, so I've embedded the draft I've been working on below. You can use the arrows at the bottom to follow the flow, or freely click around, zoom and drag.



The same presentation is here on the Prezi site. The topic of the workshop is strategic planning, and particularly a nice method I learned from Jon Shannon on doing an inventory of stakeholders, their goals, and possible solutions to tactical or strategic challenges. I blogged about that topic in "A Useful Planning Technique." The session is at 10:15 on Tuesday, October 27, and here's the full description:
Stakeholder-Oriented Assessment and Planning
Stakeholder-Oriented Assessment and Planning (SOAP) is an approach to assessment and planning that facilitates efficient discussions and promotes confidence and early alignment of participating groups. By identifying the stakeholder-goal landscape relevant to an assessment or planning target, participants quickly reveal and document the existing and desired state of the system.
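
To give a rough flavor of what that stakeholder-goal landscape can look like once it's written down, here's an invented sketch (not from the presentation) of a small inventory for a hypothetical planning target like "revise general education":

```python
# Hypothetical stakeholder-goal inventory: who cares, what they want,
# and the current versus desired state of the system for each of them.
stakeholders = {
    "students":  {"goal": "requirements that transfer and count toward graduation",
                  "current": "confusing rules",   "desired": "clear pathway"},
    "faculty":   {"goal": "ownership of the curriculum",
                  "current": "committee fatigue", "desired": "streamlined review"},
    "employers": {"goal": "graduates who can write and solve problems",
                  "current": "mixed signals",     "desired": "verifiable outcomes"},
}

for who, view in stakeholders.items():
    print(f"{who}: wants {view['goal']} (now: {view['current']}; want: {view['desired']})")
```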