Saturday, May 08, 2010

Culturing Assessment and Scoring Cows

First, start with a Petri dish.  Leave it open during a faculty meeting with a reasonable amount of humidity in the room.  After a day or so, you should see a culture of assessment that looks something like this:

Just kidding.  If only it were that easy.  I got an advertisement from Academic Impressions a couple of days ago entitled "Creating a Culture of Assessment." Its arrival coincided with a week here of assessment meetings, so it was interesting to read the advice in the flyer while I was living it.  Here's the premise:
It's clear that often the roadblock to action isn't a lack of data, nor is it the lack of an assessment process. The roadblock is the lack of a culture of assessment on campus. As one example, 66% of provosts surveyed said what they needed most in order to translate assessment into action was more faculty involvement.
It was gratifying this week to see how engaged faculty were with their assessments, both in programs and for general projects like liberal studies and our quality enhancement plan.  I'm still new here, and it's frankly amazing how seriously everyone took the exercise.  As I mentioned in an earlier post, I started the core skills survey and got a great response. 

But not all is peaches and screams, and the article outlines some of the issues I saw, but not in the way you might expect.  The author, Donald Norris, tells us that a central question that can head off turf battles and focus the discussion is:  "How can we use assessment tools to maximize our institution's performance and the success of our students?"

I think that's an okay question, but not the one I'd use.  It sounds too technical, and puts assessment tools front and center like a dentist's tray of gleaming pointy things.  It limits the discussion, and implies that we have to have an apparatus in place in order to improve teaching.  It sounds like business-speak.  To a professor, "institutional performance" is pretty abstract. 

The question I start with depends on the situation, but a good one is "Are your students demonstrating that they can think/communicate/work?"  This is the simplest place to start.  Here's a sample result from the question "How well can your students speak?"


I showed early results from this survey to about 30 faculty in an assessment meeting, and productive conversations sprang up immediately.  They compared notes about how they had rated students and why.  Do you take into account how they speak informally?  Maybe that depends on the class.  Are we communicating these expectations to the students?  For intellectuals, a good question is worth a hundred directives.
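(For the curious, the arithmetic behind a chart like that is nothing exotic.  Here's a minimal sketch of the tally in Python; the rating scale and the counts are invented for illustration, not the actual survey data.)

```python
from collections import Counter

# Hypothetical faculty responses to "How well can your students speak?"
# The scale labels and counts are invented, not the real survey results.
responses = (
    ["very well"] * 12
    + ["adequately"] * 31
    + ["poorly"] * 9
    + ["can't say"] * 4
)

counts = Counter(responses)
total = sum(counts.values())
for rating, n in counts.most_common():
    print(f"{rating:>10}: {n:3d}  ({100 * n / total:.0f}%)")
```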



Another quote from the ad:

Building expectation for action across departmental silos starts at the top. Norris advises that institutional leaders -- president and provost -- need to set specific performance metrics and hold their direct reports accountable. "Make it transparent how data informs decisions," Norris advises. "Build a performance culture among top officials first, set expectations for improved performance. And always connect what you're doing back to how improved performance means improved success for the students."
Top-down pressure is unfortunately needed to launch assessment efforts sometimes.  In my experience, accreditation requirements help.  But that card can be overplayed.  However, I think Norris is off-base with the setting of performance metrics.  That probably works great in a pickle factory, but classrooms aren't industrial assembly lines.  A performance culture is not an assessment culture.  They point in different directions.  I saw this very clearly in one of the assessment meetings this week.  The faculty had spent two days analyzing standardized test scores and summative statistics from areas within the discipline.  As data goes, it was pretty comprehensive, and they had worked very hard on it.  There was some discussion about whether second readers were needed, given the high inter-rater reliability they had found before.  It was a sophisticated conversation about metrics.  But each one of the reports had a common problem: what to do with the results?  The graphs were pretty, and the percentages had +/- stderr on them, but in an operational sense they had little meaning.
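(If you're wondering what kind of numbers those were, here's a rough sketch of the two calculations mentioned: percent agreement between two readers as a crude stand-in for inter-rater reliability, and a proportion with its standard error.  The scores are made up; this is not the committee's data.)

```python
import math

# Invented ratings from two readers scoring the same ten student artifacts (1-4 scale).
reader_a = [3, 2, 4, 3, 3, 2, 4, 1, 3, 2]
reader_b = [3, 2, 4, 2, 3, 2, 4, 1, 3, 3]

# Simple percent agreement, a crude stand-in for fancier inter-rater statistics.
agreement = sum(a == b for a, b in zip(reader_a, reader_b)) / len(reader_a)

# Share of artifacts rated "proficient" (3 or above), with its standard error.
n = len(reader_a)
p = sum(score >= 3 for score in reader_a) / n
stderr = math.sqrt(p * (1 - p) / n)

print(f"agreement: {agreement:.0%}")
print(f"proficient: {p:.0%} +/- {stderr:.0%}")
```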

One of the presenters spoke to me afterward.  "We did two days of work," she said.  "But I'm not sure we're getting anything out of this. It doesn't tell us anything."  This echoes the first paragraph of the piece I'm quoting:
A report by the National Institute for Learning Outcomes Assessment this week drew attention to the fact that while 92% of American colleges and universities are now using at least one assessment tool to evaluate academic programs, most colleges are having difficulty integrating the results into a system of continuing improvement.
Exactly, and the problem, not the solution, may be too much emphasis on metrics.  Norris has had success using metrics and transparency before in the context of finances, where metrics are natural.  He wants to transfer that idea to learning outcomes.
The expectation was communicated and reinforced that faculty and academic leaders would make decisions based not on personal values or anecdotal evidence but on analysis of real data and actual performance.

Norris adds, "This is happening in a lot of places. It's not always pretty. It's never pretty when someone's sacred cow gets scored, and there's a need to change." The key is to keep the focus on better outcomes for students.

I assume it was unintentional, but the phrase about someone's sacred cow getting scored (rather than an ox getting gored) is great.  Thank you Mr. Norris, I'll use that.  And I want to see your rubric for scoring cows. 

If you try to set a performance metric for something as broad as writing ability, then you're in trouble again.  There are too many dimensions.  How about "percent of grammar mistakes (per word written) in a course-assigned literature review"?  That you can count.  Setting performance goals probably won't work as intended, but at least the problem is tractable.  (Read up on teaching composition and go talk to the faculty to see how devilishly complicated this is.)
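(A sketch of the arithmetic, just to show it really is countable.  The tallies below are hypothetical, and marking the mistakes by hand is still the hard part.)

```python
# Hypothetical tallies from a hand-marked, course-assigned literature review.
grammar_mistakes = 17
words_written = 2450

rate = grammar_mistakes / words_written
print(f"{rate:.2%} grammar mistakes per word written")  # about 0.69%
```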

Well, what works then?  As it turns out, we're starting a new program in visual and performing arts.  So this week I was involved with helping the faculty construct their learning outcomes and assessments.  It's not finished, but after about three hours of talk and sketch, we're pretty far along. 

They started with a sad-looking rubric that had been handed down from True-Outcome days.  It had some learning outcomes listed down the side, and a generic rubric scale with no detailed description--exactly the kind of mass-produced "metric" that gets cranked out under pressure.  They didn't know what to make of it.  So we started over.  The question we started with is: what are the most important things you want your students to learn?  They talked, and I wrote on the whiteboard, prompted them with more questions, and tried to organize their responses.  I'm not the neatest at the board, so don't laugh.  A piece of it is shown below.

The ability of students to critique one another's work was on their list, as were Expertise, Presentation, Kreative work (I had to use a K because C was already used, and I needed to abbreviate with initials), and Job Prep.  After a relatively short time, we had good descriptions of each.  I framed the next part of the conversation with these questions for each outcome:
  1. In which courses or extra-curriculars is this outcome taught?
  2. What evidence will we generate pertaining to this outcome?
  3. How can we review it and give feedback?
Number one on the list led to a quickie curriculum map, not by course (that's their homework), but by year.  
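(The skeleton of a by-year map is nothing fancy.  Here's a toy version in Python; the outcome names and placements are placeholders, not the program's actual plan, and the real whiteboard version belongs to the faculty.)

```python
# A toy curriculum map by year.  Outcomes and placements are placeholders only.
curriculum_map = {
    "Critique":      {"year 1": "introduced", "year 2": "practiced", "year 4": "evaluated"},
    "Presentation":  {"year 2": "introduced", "year 3": "practiced", "year 4": "evaluated"},
    "Creative work": {"year 1": "introduced", "year 3": "practiced", "year 4": "evaluated"},
}

for outcome, placements in curriculum_map.items():
    summary = ", ".join(f"{stage} in {year}" for year, stage in placements.items())
    print(f"{outcome}: {summary}")
```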

The whiteboard version is probably impossible to decipher, but we discovered what should go where, sometimes breaking the big outcomes into components.  Question two led to interesting discussions about what kinds of student performances happen when.  The faculty were very interested and engaged.  Then we got to the assessment part in number three.  We focused on authentic assessment based on classwork.  After all, if we're doing assessments and not sharing them with students, isn't it a waste?  We talked about the value of setting expectations, tracking progress, and then having a formal evaluation.  That picture looks like this:

One of the components of performance was the evolution of the work toward a final product, so we looked at a performance as a longish process, with documentation along the way.  This varies by discipline.  For design it might be a work contract, a portfolio of contact prints, sketches, and other intermediates, an artist's statement, and a culminating gallery show.  The totality of this is then assessed and given to the student as feedback (and I assume forms part of the grade).
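(If you like to see things concretely, here's a minimal sketch of one student's evidence trail.  The student, milestones, dates, and feedback are all invented for illustration, not part of the program's design.)

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceTrail:
    """A toy record of one student's documented progress toward a final performance."""
    student: str
    milestones: list = field(default_factory=list)  # (date, artifact, feedback) tuples

    def add(self, date, artifact, feedback=None):
        self.milestones.append((date, artifact, feedback))

trail = EvidenceTrail("A. Student")
trail.add("2010-01-20", "work contract", "scope approved")
trail.add("2010-02-15", "contact prints", "rework the lighting in series 2")
trail.add("2010-03-30", "artist's statement, first draft")
trail.add("2010-04-25", "gallery show", "final evaluation and feedback")

for date, artifact, feedback in trail.milestones:
    print(date, artifact, "-", feedback or "no feedback yet")
```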

Some parts of this might be suited to metrics, like course completion rates.  But that's more an administrative thing, suitable for a department chair's consideration.  The performance evaluation, on the other hand, is not suitable as a metric.  As faculty shape the program, they'll be setting standards and expectations for students, and creating a culture dedicated to the craft.  Because the assessment pieces are part of the process, it's natural that they will evolve along with the curriculum in response to results.  Stay tuned.
