Thursday, October 22, 2009
There is a new web-based rubrics and learning-outcomes database product called Waypoint Outcomes from a company called Subjective Metrics. At least, it's news to me. It's nice to see "subjective" used in an assessment context. I think people tend to think subjective = unreliable instead of subjective = complex, and avoid talking about it for fear of execration. I've had psychologists come up to me after my conference presentations on FACS, amazed that anyone would admit to using subjective assessment, as if they expected a mob with torches and pitchforks to appear at any time.
There is no demo online, but there are a couple of videos. The CEO shows off the product and reminds us why we need to do assessment (we in higher ed hardly need that!). He uses the word "subjective" often, which warmed my heart, but the content of the videos was a bit hit-and-miss for what I was looking for. So this isn't really a review, just some impressions from the marketing materials.
Waypoint Outcomes integrates with Moodle and other learning management systems, with the intent of providing a plug-in for assessment rather than replicating all the functionality of an LMS--a very sensible approach. Assignments get tagged with outcomes, each outcome with a rubric on which you can assign ratings and comment with observations, advice, and references. I could not tell from the video how these outcomes are organized--are they in hierarchies by subject matter, or just an alphabetical list? Can you tag learning outcomes to organize arbitrary groups? For example, maybe your Quality Enhancement Plan uses two outcomes from general education and two new ones. Can you group them easily? Not sure.
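The kind of grouping I have in mind is easy to sketch: if outcomes carry free-form tags rather than sitting in a fixed hierarchy, an arbitrary set like a QEP group falls out naturally. This is just my own illustration, not how Waypoint actually stores things--the outcome names and tags are invented:

```python
# Hypothetical sketch of tag-based outcome grouping; names are made up,
# not taken from Waypoint Outcomes.
outcomes = {
    "written_communication":  {"gen_ed", "qep"},
    "quantitative_reasoning": {"gen_ed", "qep"},
    "information_literacy":   {"qep"},
    "oral_communication":     {"gen_ed"},
}

def outcomes_tagged(tag):
    """All outcomes carrying a given tag, regardless of any hierarchy."""
    return sorted(o for o, tags in outcomes.items() if tag in tags)

print(outcomes_tagged("qep"))
# ['information_literacy', 'quantitative_reasoning', 'written_communication']
```

With tags like these, the QEP committee could pull its four outcomes in one query without disturbing whatever subject-matter organization the rest of the campus uses.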
Ratings are assigned directly in the rubric form online, with fields to give students comments on each outcome. You can assign weights to each outcome, so that an overall score is generated (i.e., creating customized statistical goo). It has a robust database on the back end, allowing filtering, aggregation, and reporting. One nice feature is the ability to randomly sample student work for after-the-fact assessment, possibly by someone other than the course instructor. Of course, for that to be successful, the assignment itself has to be available too. I don't know if there's a strong link in the system to the assignment, so that it could be referenced as a hyperlink, for example. I can imagine wanting to generate a list of all assignments in a particular discipline that used a certain learning outcome.
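To make the statistical goo concrete, here's a minimal sketch of what a weighted overall score and a random sample of student work might look like. The outcome names, weights, and ratings are all invented for illustration; I have no idea what formula Waypoint actually uses:

```python
import random

def overall_score(ratings, weights):
    """Weighted average of per-outcome rubric ratings (the 'goo')."""
    total_weight = sum(weights[o] for o in ratings)
    return sum(ratings[o] * weights[o] for o in ratings) / total_weight

# Invented example: three outcomes rated on a 1-4 scale.
ratings = {"writing": 3, "analysis": 4, "citation": 2}
weights = {"writing": 0.5, "analysis": 0.3, "citation": 0.2}
print(round(overall_score(ratings, weights), 2))  # 3.1

# After-the-fact assessment: draw a random sample of submissions
# for a second rater to review.
submissions = [f"student_{i}" for i in range(1, 101)]
sample = random.sample(submissions, k=10)
```

The sampling step is the part I find most promising: ten papers out of a hundred is a workload a reviewer can actually manage, which matters for the staffing problem I get to below.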
My overall impression is that this is a solid design for keeping track of rubrics and ratings. User-interface design is critical, however, and I don't have any sense of how easy it is to construct new rubrics, attach them to assignments, and pull them up later.
The central problem with the whole method of on-the-spot rubric ratings is the time required of the instructor. This system would work great if every instructor were assigned a rubric rater to do all the assessments. You wouldn't really want the instructor rating his or her own work anyway, would you? If the scores are to be used to judge how much was learned and to contribute to performance ratings for an instructor, you wouldn't want to allow self-ratings of performance, which is in effect what these are. Separating teaching from assessment tends to devolve into standardized testing, which can't be a complete solution. Having instructors rate each other's students wouldn't work either--it would be too political.
An approach I like, which would be doable with the Waypoint software, is to tag assignments with the learning outcomes--but not too many of them--and use a small committee of teachers of that subject to periodically scan the student work through the lens of the rubrics and outcomes, and reach some consensus about how well we're doing. That is, rate the work as a team, after the fact, and only for a sample. This creates a nice dialogue that can lead to solving complex issues. The idea of rating every student on every outcome all the time is noble, but you may need to hire more people to pull it off.