
404: Learning Outcomes

tl;dr: Searched SACS reports for learning outcomes. A table of links, general observations, and a proposal to create a consortium to make these reports public.

In grad school there was a horror story that circulated about a friend of a friend of a cousin, who was a math grad student in algebra. He had created a beautiful theory with wonderful results, and was ready to submit when it was pointed out to him that his axioms were inconsistent--they contradicted one another. The punchline is that you can prove anything of the empty set. This sometimes also happens to degree programs that suddenly have to prove that they've been doing assessment loops, except in reverse: building grand theories from the empty set.

I complained the other day that there weren't many completed learning outcomes reports from universities to be found on the web. So when I noticed Mary Bold's post at Higher Ed Assessment, "Reporting Assessment Results (Well): Pairing UNLV and OERL," I thought I'd hit paydirt. The hyperlink took me to a page at the University of Nevada, Las Vegas with a link advertising "Student Learning Outcomes by College." Without further ado, here's the page:

[Screenshot: the link resolves to a 404 "page not found" error.]

That's just too funny. There are, however, excerpts from actual results listed in the teacher education site, which you can find here. That site is the OERL that Mary refers to in her post.

It did make me think, however. There must be a bunch of SACS compliance certifications out there on the web now, and section 3.3.1 (used to be 3.4.1) covers learning outcomes. Want to see how your peers have handled it? The name of the school in the table below links to the compliance certification home page for that institution. For good measure I'll throw in 3.5.1, general education assessment, too. You're welcome.

Institution | 3.3.1 | 3.5.1
Southeastern Louisiana University | link | link
Western Kentucky University | link | link
Berea College | link | link
The College of William and Mary | link | link
The University of Alabama in Huntsville | link | link
Nashville State Community College | link | link
Mitchell Community College | link | link
The University of Texas at Arlington | link | link
University of New Orleans | link | link
Albany State University | link | link
Bevill State Community College | link | link
Louisiana State University in Shreveport | link | link
Texas Tech University | link | link
Coker College | link | link

I did not try to make a complete list of all available reports. If you find a good one, send me the URL and I'll add it. Here's the Google search I used.

Disclosure: I was the liaison, IE chair, webmaster, and editor for the Coker College process (as well as doing the IR work and a bunch of other stuff--no wonder I have white hair). The supporting documents linked from that report are turned off for confidentiality, but you can find the complete list of program learning outcomes plans and results here.

Observations:
First, hats off to all the institutions that make these reports public. This is a great resource for anyone else going through the process.

I only scanned through the reports, looking for evidence of learning outcomes. I probably missed a lot, so take my remarks with a grain of salt--go look for yourself and leave a comment if you find something interesting. It should go without saying that in order to be helpful, this has to be a constructive dialogue.

For learning outcomes I didn't find as much evidence-based action as I would have expected, given all the emphasis that SACS puts on it. My own experience was that programs were uneven in their application of what is now 3.3.1 (at the time SACS didn't even have section numbers for the requirements--how crazy is that? I invented my own, and then they published official ones just before we had to turn the thing in). So there was a lot of taking the empty set and trying to build something out of it. That can take various forms, which one notices in scanning certification reports:
  • Quick fixes: use a standardized instrument like the MAPP, MFAT, CLA, or NSSE. Of course, it's not really that quick, since it would take at least a year to get results, analyze them, and use them. The conceptual problem is tying results to the curriculum (except for the MFAT).

  • Use coursework: passing a certain course certifies students in X (e.g., use of technology); passing a capstone course with grade X ensures broad learning outcomes. This is fairly convincing as gatekeeping, but hard to link to changes unless specific learning outcomes are assessed.

  • Rubric-like reporting: Okay, I'm not a big fan of rubrics when employed with religious zeal and utter faith in their validity. But I have to admit that the most convincing report summary I saw on learning outcomes was the one below from Mitchell Community College. Not all the data points are there, but that's realistic. Take a look.

[Image: Mitchell Community College's rubric-style summary of learning outcomes results.]

Of course, this still has to be tied to some analysis and action to get full points, but the presentation of the learning outcomes is clear and understandable. In general, that was somewhat of a rarity in my cursory review. What there is a LOT of is plans, new plans, minutes describing the construction of new plans and goals, assessment forms, models and processes, and generally new ambitions and enthusiasms. There are standardized test reports like CLA summaries, which solve the data and reporting problem but don't touch the hard part: relating results to the practice of teaching in a discipline.

I believe that if our efforts as assessment leaders are to be maximally useful, we have to make the annual, messy, incomplete, inconsistent, but authentic program-level plans and results available to the public. This would encourage us to adopt some kind of uniformity in reporting and improve the quality of the presentation (maybe I'm a fool for saying that). The only downside is that if we're honest, there will be empty sets here and there--programs that have not been dragged into the 21st century yet. But transparency can help there too, perhaps by shaming some into compliance. Just imagine (really dreaming now) if the quality of the reports were good enough to use for recruiting and to splash across the program web page.

The Voluntary System of Accountability tries to do something like that. Unfortunately, that group seems to be enamored of standardized tests for learning outcomes. There's a validity study they just published here that you can consider. This post isn't the place to go into all the reasons I think standardized testing is the wrong approach, so let me just leave it at that.

Thinking more positively: is there any interest out there in forming a loose consortium of schools that report annual learning outcomes for programs? The role of the consortium could be to settle on standard ways of reporting and to define best practices.

Comments

  1. Fascinating topic, but I really have a hard time with "404s" at Coker -- why is the Faculty Manual off-line? How can I link to your SACS faculty rosters?
    Glen S. McGhee, FHEAP

  2. Glen, There's no reason for those particular docs to not be linked. When I got approval to post the report online, it was with the understanding that sensitive documents like board minutes would not be included. At the time, the easiest thing to do was simply not upload the reference document folder, so I wouldn't have to inspect the thousand or so documents one by one to see what should be protected. Coker did make its learning outcomes reports available in their entirety, which is very unusual. I don't actually work there anymore, so it's not in my power to go back and link to the faculty manual or rosters.

    dave

  3. "I don't actually work there anymore, so it's not in my power to go back and link to the faculty manual or rosters."

    I am very sorry to hear that, although it doesn't surprise me.

    Often, IE folks are like hired guns who come into town to clean it up on contract, make lots of enemies getting the hapless school through review, only to be dumped over the side at the first opportunity. It is not a life for the faint-hearted.

    I quite enjoyed your Assessing the Elephant ...

  4. Actually, it was a very tough decision to leave. Coker is a wonderful place to work, and I'd been there 17 years.

    Thanks for the note on the Elephant. Have to find time to update it.

  5. This is very interesting, as my community college has not yet been required to assess learning outcomes. A colleague and I attended a conference for student housing where learning outcomes were discussed. I had never heard of the term before then, and now I'm looking for more information so my college can get a head start.

    The questions I have so far are:
    1. What departments are required to assess?
    A> Looks like the answer is "all non-administrative departments" from what I've read so far...

    2. How often does the assessment have to be applied?
    A> Annually...?

    3. Whom do we report to?
    A> SACS?

    4: From whom will the requirement come?
    A> SACS...?

    5: What format must the assessment reports be in?
    A> ???

    Any input would be appreciated; we'll keep searching. Thanks so much for the article and the links.

    The game is afoot!

  6. Cannon, I am most familiar with SACSCOC standards, so your mileage may vary. All academic programs need learning outcomes plans, as do student services departments. Assessment cycles are typically annual, but the activities happen all year long. You would be responsible to your regional accreditor's standards. As for the format of assessment reports, it's generally three or more objectives of the following form, though there are a number of variations:

    Date
    Program Name
    Coordinator Name
    Statement of Objective
    Assessment method for the objective
    Assessment results
    Analysis of assessment results
    Actions planned to be taken
    Follow-up: what was actually done?

    It takes a whole cycle to be able to write down all of this. Up through "assessment method" is the plan of action, and everything afterwards is what actually happened.
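
    If it helps to see that skeleton spelled out as a structure, here's a rough sketch in Python. The field names are mine and purely illustrative--SACS doesn't prescribe any such schema:

        # Rough sketch of the report skeleton above; field names are illustrative.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Objective:
            # The plan of action: written before the cycle begins.
            statement: str                          # Statement of Objective
            method: str                             # Assessment method for the objective
            # What actually happened: filled in as the cycle completes.
            results: Optional[str] = None           # Assessment results
            analysis: Optional[str] = None          # Analysis of assessment results
            actions_planned: Optional[str] = None   # Actions planned to be taken
            follow_up: Optional[str] = None         # Follow-up: what was actually done?

        @dataclass
        class AssessmentReport:
            date: str
            program_name: str
            coordinator_name: str
            objectives: List[Objective] = field(default_factory=list)  # typically three or more

    Treating each objective as its own record has a side benefit: program reports could be rolled up into the kind of uniform public reporting I pitched in the post.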

    See "Assessment Essentials" by Banta and Palomba for a good reference--one of many out there. There's also an email list called ASSESS-L you can join.

    Good luck!

  7. Thank you kindly Dave! This is superb; more than I hoped for. I'll certainly look up "Assessment Essentials" by Banta & Palomba.

    I look forward to checking your other writings - you've made a fan of me. :]

    Big thanks from West Texas!

  8. You're quite welcome! My contact information is on my resume (upper left on this page). Please call or email if there's something I can help with.

