Despite the lofty goals of Transparency by Design's leaders, though, the program has been ensnared in exactly the same kinds of concerns that have imperiled or at least impeded other discussions of college learning outcomes -- difficulty in defining common outcomes among diverse groups of institutions, and worries that disclosing some information will make some colleges look worse than others, hurting them competitively. The result is that one can find program learning outcomes for American Intercontinental University (and other participating schools) and see results as an average. For example, outcome 1 is "Demonstrated knowledge, understanding, and ability to apply the principles and processes involved in the career focus of the transferred diploma or certificate program of study." This is rated on the scale 1 = Needs Improvement, 2 = Meets Expectations, 3 = Exceeds Expectations. The average for the program is 2.0.
This is admirable. It is hard to find examples of learning outcomes with ratings on the web. I published Coker College's with permission back in March here, but it's rare. From an IE perspective, what's missing on the summary pages at Transparency by Design's site is the analysis, actions, and improvements that result from doing the assessments -- the closing-the-loop part -- but these would probably be judged too sensitive to be routinely made public.
There were some unfortunate design choices, which are uncommonly common in outcomes assessment. The first is using a scale that doesn't reflect the hoped-for progress of learners. The "needs improvement" to "exceeds expectations" scale will tell us who the best and worst students are in each class, but won't tell us what the final result of the curriculum is. The differences in averages (2.0 for economics, 2.5 for critical thinking, 1.8 for marketing) probably reflect the relative difficulty of the curriculum and of assessing it. For critical thinking, the outcome is fuzzy and (at least here) ill-defined, probably resulting in over-scoring.
The second problem is reporting outcomes as averages (statistical goo). There are much more useful ways of doing that, which I've written about before.
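To see why an average on this scale is goo, consider a small sketch. The sections and ratings below are entirely made up for illustration; the point is that two very different groups of students can produce the identical 2.0 average, so reporting the distribution of ratings tells a reader far more than the mean does.

```python
from collections import Counter

# Hypothetical ratings on the 1-3 scale used by the program:
# 1 = Needs Improvement, 2 = Meets Expectations, 3 = Exceeds Expectations.
# Two invented sections with the same average but very different stories.
section_a = [2, 2, 2, 2, 2, 2]  # everyone meets expectations
section_b = [1, 1, 1, 3, 3, 3]  # half struggling, half excelling

def report(ratings):
    """Return the mean and the count of each rating 1-3."""
    avg = sum(ratings) / len(ratings)
    counts = Counter(ratings)
    dist = {score: counts.get(score, 0) for score in (1, 2, 3)}
    return avg, dist

print(report(section_a))  # (2.0, {1: 0, 2: 6, 3: 0})
print(report(section_b))  # (2.0, {1: 3, 2: 0, 3: 3})
```

Both sections average 2.0, but only the distribution reveals that one program is uniformly adequate while the other is failing half its students.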
These design issues aside, I think this is a model that higher ed in general should emulate. If nothing else, the exercise focuses our attention on how to communicate our product to the masses. How do we explain to Mr. and Mrs. Pocketbook what we expect a math major to be able to do, and how successful we think our students are at doing it? This is a big step, even when comparisons to other institutions are off the table. Kudos to the Transparency by Design institutions. I have a feeling the accrediting behemoths are discussing such ideas behind the oiled oak doors of the inner sanctums. Assessment positions are going to be a growth industry for a long time, methinks. Is there a PhD in it yet? Maybe your institution can be the first.