Friday, April 29, 2011

Small College Initiative

I attended the SACS Commission on Colleges Small College Initiative this week. You can find the slides on the SACS web site here. Below are some of my notes and observations.

Mike Johnson talked about CR 2.5 (Institutional Effectiveness) and pointed out the difference between assessment and evaluation. In my interpretation of his remarks, the former is gathering data and the latter is using it to draw conclusions for action. In my experience, this gap is where many IE cycles break down. Signs of this are "Actions for Improvement" that:
  • Are missing altogether
  • Are too general or vague to be put into practice
  • Report that everything is fine, and no improvements are necessary
  • Suggest only improvements to the assessments
I have a lot more to say about this (there's a surprise), and am preparing a talk for the Assessment Institute and a paper for NILOA on related topics.

Another important point is that IE cannot be outsourced or even in-sourced to a director. The whole point is that it is a collaborative exercise in striving to achieve goals; I think results are proportional to participation. In a similar vein, Mike noted that computer software can help organize reporting, but it doesn't magically solve the problem of generating quality IE loops. Garbage in = garbage out.

A wonderful suggestion was to use the creation of "board books" as a way to encapsulate IE reports in a natural way that's already being done. Mike's larger point here is that we already have many real IE processes--all institutions that manage to survive use data one way or another--and there's no need to create an artificial one for reporting. I saw this during a review, where the institution had wonderful processes in place, but didn't include that documentation in the compliance certification, and instead reduced all that rich information into a four-column grid that "looks like it's supposed to." Of course, one problem here (in my opinion) is that there doesn't seem to be a standardized way to look at IE processes. If we were serious about it, we'd do inter-rater reliability studies and create tight rubrics with lots of examples in a library, showing what's acceptable and what's not. I think this would go a long way toward reducing the number of out-of-compliance findings. Way back when--over a decade ago--I heard a SACS VP complaining that even back then, IE had been around a long time and colleges should know what to do by now. That's true as far as it goes, but it should be acknowledged that: 1) it's very hard to satisfy committees, and 2) it's not entirely clear what is acceptable and what's not. Part of the problem is that while the theory of IE loops is easy to understand, the practice is far more difficult. Sort of like Socialism.
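
To make the inter-rater reliability idea concrete, here is a minimal sketch (in Python) of the kind of number such a study might produce: Cohen's kappa for two reviewers judging the same set of IE reports. The reviewers and ratings below are made up for illustration, not SACS data.

    # Hypothetical example: two reviewers each judge the same IE reports as
    # compliant (1) or non-compliant (0). Cohen's kappa measures how much they
    # agree beyond what chance alone would produce (1 = perfect, 0 = chance).
    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters rating the same items (nominal categories)."""
        n = len(rater_a)
        categories = set(rater_a) | set(rater_b)
        # Observed agreement: fraction of items where the two raters match.
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement: chance overlap implied by each rater's marginals.
        count_a, count_b = Counter(rater_a), Counter(rater_b)
        p_expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
        return (p_observed - p_expected) / (1 - p_expected)

    # Invented ratings for ten reports (1 = judged compliant, 0 = not).
    reviewer_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
    print(round(cohen_kappa(reviewer_1, reviewer_2), 2))  # 0.52: moderate agreement

A real study would report kappa (or a similar statistic) over many reports and reviewer pairs; consistently low values would be evidence that the standard itself, not the institutions, is ambiguous.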

There was a clarification that it is acceptable for institutions to sample programs when reporting on 3.3.1.1, in lieu of reporting outcomes for every single one. There is supposed to be a policy statement about this on the web site, but I couldn't find it after several minutes of searching the list for 3.3.1, effectiveness, outcomes, sampling, etc. If someone finds it, please let me know. The main thing is that the sample should be representative and not look like it was cherry-picked (e.g. reporting only programs that have discipline-based accreditation).
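
As a hypothetical illustration of "representative," here is a small Python sketch that draws a couple of programs at random from each division, so the selection is spread across the institution rather than hand-picked. The divisions and programs are invented.

    # Hypothetical stratified sample of programs for 3.3.1.1 reporting:
    # a few programs drawn at random from each division, so no one can say
    # the list was cherry-picked. All names below are made up.
    import random

    programs_by_division = {
        "Arts & Sciences": ["Biology", "English", "History", "Mathematics", "Psychology"],
        "Business": ["Accounting", "Management", "Marketing"],
        "Education": ["Elementary Ed", "Secondary Ed"],
    }

    def sample_programs(by_division, per_division=2, seed=2011):
        """Pick up to per_division programs at random from each division."""
        rng = random.Random(seed)  # fixed seed so the draw is reproducible/auditable
        return {div: sorted(rng.sample(progs, min(per_division, len(progs))))
                for div, progs in by_division.items()}

    print(sample_programs(programs_by_division))

Documenting the selection rule (and the random seed) ahead of time would be one way to show reviewers that the sample was not chosen after the results were in.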

It was noted that CS 2.10 implicitly has learning outcomes reporting requirements, making it a pseudo-IE standard. I included this in my recommendations for 'fixes' to the Principles in my letter to SACS, posted here for comment. Not many institutions seem to be flunking it, though, unlike 3.3.1.1 (see below).

The fifth year report was highlighted in a break-out session. You can find additional slides on this topic on the website here. Out of 39 institutions, 28 (about 72%) were cited on 3.3.1.1, and alarmingly, the citation rate for the QEP Impact Report was 33%. Although this says that the review process is no piece of cake (which is good--it should be meaningful), it points to a problem. In fact, the rationale for the Small College Initiative is to help address this problem, which is particularly acute for small schools. As a side note, over lunch I talked to an IR director who speculated that there is a bias against citing large schools, particularly ones with high rankings. It would be really interesting, in conjunction with the inter-rater reliability study I fantasized about above, to have blind reviews of 3.3.1.1.

Given the growing emphasis on student learning outcomes (including the new credit-hour rules), a whole separate system for learning outcomes may need to be developed. One of the challenges on the horizon, in my view, is the contradictory status of grades. On the one hand, they are the basic unit of merit for courses, with a vast bureaucracy behind them. On the other hand, grades are not seen as 'real' assessments. This needs to be fixed. I don't know if Western Governors University's model is the answer, but what we have now makes no sense, and it is impossible to explain to the public.

Reasons given for flunking a QEP included:
  • Bad planning, which leads to a bad report. One kind of bad plan is one that's too broad. 
  • Failure to execute it, e.g. if a new administration comes in and lacks enthusiasm for the old project
  • Not talking about goals and outcomes in the report. Hard to believe.
  • Not describing the implementation (just narrating the creation, perhaps)
  • Not collecting or using data
  • Bad writing. Ironic, since so many QEPs are about writing.

Tips for writing QEP impact reports:
  • Follow the directions given in the SACS policy
  • Address all the elements
  • Keep narrative to 10 pages. (You can apparently link out to other documents, which I hadn't heard before. I thought everything had to be in 10 pages.) [Edit: see the update below]
  • Use data, but include analysis--don't just put in graphs with no explanation.

Networking over lunch, I gleaned a couple of nifty ideas. At one institution, faculty contracts include a 'gotcha' clause, which stipulates that if assessment reports are not done by date X, then the prof has to stick around until date Y to finish them. This provides an incentive to get them done. Also, the reports are broken down into phases across the academic year, so that not everything is done at once. Smart.

Update: Mike Johnson posted a note to the list server saying that the links in the 10 page (max) Impact Report can only be internal to the report itself, which does not allow 'extra room'. In his words:
Links within a disk or flash drive are okay as long as the documents that are part of the link are included in the ten page maximum length. So please do not use hyperlinks to documents as a means to lengthen the report.
