Monday, August 03, 2009

Means of Assessment

Over the summer I've had to solve a particularly urgent assessment problem stemming from policy changes that affected our freshmen, which in turn caused programmatic changes in the way the Quality Enhancement Plan is to be delivered. Working intensely with faculty, we've tried to come to grips anew with the four learning objectives. The assessment plan for the established model is complex, and so part of my job was to simplify it.

I think there is an impulse, especially in the glaring light of an impending accreditation, to create an illusion of science by making nice forms and boxes into which one may pour one's classroom or extra-curricular observations. Each of these gets its own rubric, and the theory is that these micro-observations add up to something meaningful in the end. Maybe. But it's not nearly as easy as it sounds. At a minimum, the assessment organizer has to:
  1. Get faculty to construct rubrics they believe in. (Imposing them from the outside isn't recommended for obvious reasons.) This usually entails subdividing learning objectives into the "dimensions" that apply. For example, maybe making eye contact is a component of effective speaking. Again, the faculty who are going to use the rubric ought to be in charge of it. Then the Likert-like rating scale has to be described clearly enough that raters can apply it consistently.
  2. Test the rubrics for inter-rater reliability. There's no point in doing all this work unless you know the instrument performs as expected. It won't, of course, and if you're serious about it, you need to "center" the raters by getting them to agree to agree. (A rough sketch of one common agreement statistic appears after this list.)
  3. Establish assignments that are to be used for assessment, since using different ones lowers reliability.
  4. Build the systems, policies, and information technology to allow this process to be organized.
  5. Motivate the faculty to actually enter the ratings.
  6. Produce meaningful reports.
  7. Motivate faculty to look at the reports and make changes accordingly.
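
Item 2 is the one with a quantitative flavor, so here is a minimal sketch of one common agreement statistic, Cohen's kappa, applied to made-up rubric scores from two raters. The 1-4 scale, the scores, and the function are purely illustrative assumptions, not anything from our actual process, and any statistics package will give you the same number.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Chance-corrected agreement between two raters scoring the same items.
        n = len(rater_a)
        # Observed agreement: fraction of items where the two ratings match.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, from each rater's marginal distribution.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
        return (observed - expected) / (1 - expected)

    # Hypothetical scores on a 1-4 rubric for ten student speeches.
    rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
    rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
    print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.71 with these made-up scores

A kappa well below, say, 0.6 would be a sign that the raters still need "centering" before the ratings mean much.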
It shouldn't be a terrible shock that this is a tall order. Although much of this infrastructure existed already for our learning objectives, it didn't seem to be hanging together, nor producing the kind of useful output one would hope for given the amount of work going into it. So, with my encouragement, the assessment committee took out its simplification scalpel. We created a simple hierarchy of assessment activity:
  1. Quantity. Simply count the number of times an activity is linked to a learning objective.
  2. Quality. Faculty discuss the student work linked to the learning objective. This is done in a special meeting of the program, department, or course instructors, and has some minimal structure to it based on a standard IE model (what did we want to do, what did we observe, what can we conclude, what shall we do about it).
This still requires technology to keep track of things, but rather than rubrics, all we need is a way to tag assignments with a particular learning outcome. For this first iteration, particular assignments in particular classes are to be tagged with "goal 1", "goal 2" and so on, so that we can find them later to provide the content to discuss in groups. Motivation is still an issue too, but because the process is much simpler and more structured, I hope it will be less of a problem.
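
To make the tagging idea concrete, here is a minimal sketch that assumes nothing about our actual learning management system: a few invented assignment records carry a goal tag, a count per tag gives the "quantity" measure, and grouping the tagged work supplies the material for the "quality" discussions. Course names, field names, and tags are all hypothetical.

    from collections import defaultdict

    # Hypothetical records exported from the LMS: each tagged assignment
    # names its course, the assignment itself, and the goal it is linked to.
    tagged_assignments = [
        {"course": "ENG 101", "assignment": "Persuasive essay",   "goal": "goal 1"},
        {"course": "COM 110", "assignment": "Group presentation", "goal": "goal 2"},
        {"course": "ENG 101", "assignment": "Research paper",     "goal": "goal 1"},
    ]

    counts = defaultdict(int)    # Quantity: activities linked to each objective.
    by_goal = defaultdict(list)  # Quality: the work to bring to the discussion.

    for record in tagged_assignments:
        counts[record["goal"]] += 1
        by_goal[record["goal"]].append((record["course"], record["assignment"]))

    for goal in sorted(counts):
        print(goal, counts[goal], by_goal[goal])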

Frankly, I think it's still too complicated because our information systems are too complex. I'd prefer a simple-as-possible drop-box solution, but we have a learning management system that's reasonably well established, and after discussion with the faculty, I think it's best to stick with it rather than create something new.

I was interested to see a post on the ASSESS listserv that mirrors this line of thought. Linda Suskie (now at Middle States Commission on Higher Ed.) describes three options for rating evidence of learning, which she defines as "tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned." The three approaches she describes could be distilled as:
  1. Rubrics as I have described them, more or less. She leaves out the details.
  2. Observers writing impressions qualitatively for learning objectives rather than using a rubric (this is probably more work than a rubric, but might be more natural especially for complex objectives).
  3. The third is similar to what I've described: "A third option would be to have the discussion, but have all discussions focus consistently on how well the presentation demonstrates achievement of the assignment's key learning outcomes and have notes of the discussion recorded as direct but qualitative evidence of student learning. This would be sort of like completing just one structured observation guide for all observers, rather than have each observer complete one separately."
If you generalize the third option to include all observations across all assignments, you get the sort of thing I was going for. As she says in her concluding line, "The key is to be systematic and tie the assessment to key learning outcomes." You can find some slides from a presentation she did here, as well. It's worth taking a look. I find the philosophy common-sense. If the thing doesn't work, you've wasted your time, right? But n.b., I have a bias against complexity!
