Friday, May 28, 2010

Fixing Assessment

An article this morning in Inside Higher Ed takes up the issue of faculty willingness to engage in meaningful assessment, citing an essay by Pat Hutchings.  I wrote about that essay, "Opening Doors to Faculty Involvement in Assessment," here last month.  My summary was:
Despite the odd misstep into virulent bureaucracy and too much enthusiasm for top-down assessments not tied to objective top-level goals, the article gives excellent advice for building the culture of assessment our accreditors are always going on about.  The recommendations are practical and useful, and can address the assessment problem at the troop level.  The big issue of getting the top-down approach fixed is not the topic of the article, and given where we are, is probably too much to hope for. 
The Inside Higher Ed article complements Hutchings' findings with comments by Molly Corbett Broad, president of the American Council on Education.  She makes a point that is obvious but underappreciated:
[I]f faculty do not own outcomes assessment, there will be minimal impact on teaching and learning and, therefore, on student achievement
If the chain of logic [behind learning outcomes assessment] begins from an accountability perspective, the focus is on the institution, and if it is primarily an institutional measure, it is potentially disconnected from how individual faculty members teach
Blackburn College's experience in trying, failing, and trying again to find that formula is also described.  In a linked document, we find this telling statement:
Assessment came to the college as an ill-defined external mandate that was perceived to be an adversarial process with connotations that threatened these core faculty concerns. Many faculty members raised questions about how such a mandate could be incorporated into college policy and practice without violating the fundamental values of the institution. 
The politics of assessment, from the Dept of Ed on down, have not been very conducive to positive change.  This, I believe, is actually easy to fix.

Fixing Assessment.  There needs to be a clear and achievable expectation at each level of administration.
  1. The Dept of Education should not be concerned with classroom learning outcomes.  This is not a strategic goal, and it can't be measured in a comparable way.  Rather, the Department should state clearly in an updated strategic plan what the vital interests of the USA are with regard to higher education.  Is it more high-paying jobs for graduates?  Then give universities feedback (using IRS data) about how well we do in that regard.  Is it technology?  Then emphasize science and math as we did during the space race.  Is it energy?  Then appropriate measures are the number of graduates in relevant disciplines, the number who attend graduate school, and so on.  Do you want graduates to be better citizens?  Track voting records and give the universities feedback.  Engagement with the world?  Use carrots and sticks to foster languages and intercultural studies (or whatever is important).  Give us goals that are very easy to understand and turn into actions.
  2. Accreditors have done a good job of underlining the importance of assessment, but have been caught in two traps: the fuzziness of the goals from the top, and the perception on the part of faculty that this is all a top-down exercise.  This is the point made above, and in my experience it's completely valid.  Once the DoE defines clear and actionable goals, regional accreditors can focus on fostering faculty-supported assessment that will improve classroom teaching, and forget about "accountability."  They can perform a tremendous service this way.
  3. Institutions would have good information from the DoE about how they fit into national priorities.  Some institutions won't fit that profile, and others will, but either way the goals will inform and guide institutional strategic planning on some level.  (Do we want to start a computer science program and try to get federal grants? etc.)  If there is a DoE goal related to employment of graduates, this is probably the most general sort, and any institution should be able to engage with it.  On a lower level, the administration has to make sure that the assessment programs are working effectively.  That doesn't mean heavy bureaucracy, expensive online systems, or a raft of standardized tests.  Institutional data on learning outcomes is easy to get; for evidence of how to gather useful assessment data from faculty for free, see "FACS Redux."  With three emails and a few department meetings, we got (at last count) 2731 learning and non-cognitive outcomes ratings on a base of fewer than 1500 students.  It tells us what the faculty really think about students, in a form the institution can use.
  4. Academic programs are where assessment and improvement mostly happen.  In my experience, once faculty are guided in a way that makes sense to them (see "Culturing Assessment and Scoring Cows"), a light comes on, and most can see the importance of it.  Not all faculty or all programs will do a perfect job of it, but I believe that with cultivation the right attitudes and practices can be developed institutionally.  It means staying away from language like "accountability" and "measurement," and focusing on what, specifically, faculty want students to learn in individual classes.  It's not hard.
Instead of this, what we have is a confusion of priorities, methods that don't make sense, and alienation of the people we're trying to engage. It's political, messy, and largely irrational: rather like any faculty senate.  We can do better.

Update: see Assessment Crosstalk for analysis of the comments to the IHE article.
