Monday, December 14, 2009

Link Salad, Analytics, and Pell

Some links of interest for the numerically inclined:
Also, in "Want a Job? Analytics is the Thing, Says IBM" we find that data mining and business analytics are the new plastic:
“In this world, intelligence is replacing intuition,” said Ambuj Goyal, a former IBM researcher who is now General Manager of IBM’s Business Analytics and Process Optimization Group. 35,000 people now report to Goyal, according to IBM.

Simoudis believes the demand for these jobs will only grow thanks to several big trends. One is the sheer data explosion. When Simoudis was working in the software business in the 1980s, he said data warehouses used to handle two terabytes of data. Today, just one small online ad network is generating 100 terabytes of data, while social network Facebook is spewing out 1.5 petabytes of data a year, or 1,500 terabytes. All those status updates and party photos consume massive amounts of data.
I see a new learning outcome for general education... At least it's an argument for requiring computer programming in addition to math.

Finally, the story you probably saw in InsideHigherEd "Pell Costs Explode":
Obama administration officials confirmed on Thursday that unexpectedly strong demand for Pell Grants would sharply increase government spending on its primary need-based student aid program, requiring an extra $18 billion over the next three years.
The article links the overage to the economic hard times, and this is likely valid. But I wonder how much of it is due to the cyberdons (i.e. online for-profits). Surely, with their burgeoning market share of the lesser-qualified student, they're creating more of a market for Pell Grants. If so, this pressure will not abate, but will eventually force Pell Grants to be capped, hurting the traditional schools--an indirect effect of competition. There is, ultimately, a finite amount of resources.

Friday, December 11, 2009

Survey Addresses Drop-Outs

Public Agenda's recent report "With Their Whole Lives Ahead of Them" is subtitled "Myths and Realities About Why So Many Students Fail to Finish College." It's essential reading for anyone interested in student retention. Citing an average 40% six-year graduation rate for four-year degrees, the report tries to answer the "why?" question with a survey of 600 young adults who had firsthand experience. The complete methodology can be found here. The report is released under a Creative Commons license.

The demographic characteristics of those surveyed belie the image of a typical college student. Quoting from the article:
  • Among students in four-year schools, 45 percent work more than 20 hours a week.
  • Among those attending community colleges, 6 in 10 work more than 20 hours a week, and more than a quarter work more than 35 hours a week.
  • Just 25 percent of students attend the sort of residential college we often envision.
  • Twenty-three percent of college students have dependent children.
The main part of the report is framed around "myths and realities," such as:
MYTH NO. 1: Most students go to college full-time. If they leave without a degree, it’s because they’re bored with their classes and don’t want to work hard.

REALITY NO. 1: Most students leave college because they are working to support themselves and going to school at the same time. At some point, the stress of work and study just becomes too difficult.
According to the survey, "Those who dropped out are almost twice as likely to cite problems juggling work and school as their main problem as they are to blame tuition bills (54 percent to 31 percent)."

Graphs display ranked survey items. A portion is shown below.


The third section addresses an issue that I've discovered independently.
MYTH NO. 3: Most students go through a meticulous process of choosing their college from an array of alternatives.

REALITY NO. 3: Among students who don’t graduate, the college selection process is far more limited and often seems happenstance and uninformed.
My retention study at one institution showed that students who had reported on the CIRP that they were at their first-choice college had three other interesting characteristics. One was that they tended to be first-generation students. They also were by far at the highest risk for attrition. And in a subsequent survey two months after the CIRP, many of them had changed their minds about the first-choice qualification. In short, they were uninformed consumers and became quickly disaffected--or at least, that's the way I interpreted the data.

The survey asked for proposals to help other students get a degree. The top responses are shown below.

The other two "myths" presented in the report are:
MYTH NO. 2: Most college students are supported by their parents and take advantage of a multitude of available loans, scholarships, and savings plans.

REALITY NO. 2: Young people who fail to finish college are often going it alone financially. They’re essentially putting themselves through school.
and
MYTH NO. 4: Students who don’t graduate understand fully the value of a college degree and the consequences and trade-offs of leaving school without one.

REALITY NO. 4: Students who leave college realize that a diploma is an asset, but they may not fully recognize the impact dropping out of school will have on their future.
Much more information and analysis is available in the report itself. I'm not crazy about post hoc retention surveys: if we want real predictors, we need to gather information before students leave, and after the fact, causes and effects may shift in students' minds over time, as with the first-choice question noted above. On the other hand, this report is thoughtfully done, and does seem to illuminate some interesting issues.

There is a section on what can be done to help. Providing more financial aid for part-time students is one suggestion, as are more flexible options for attending. I presume that online courses would fit the second bill nicely. It seems obvious that more work-study options on campus would help too--a double win for the college, since students would be more engaged and provide inexpensive labor. Addressing child-care problems for students ought to be high on the list too.

Thursday, December 10, 2009

SACS 2009

The annual Commission on Colleges meeting was in Atlanta this year, and I drove down from Charlotte with a colleague under blue skies. Drove back in the rain. In between, the days and nights were packed. I had optimistically taken along one book on higher ed, one text on complexity theory, two sci-fi novels, and a briefing book of articles on college enrollment (is there a word for fear of being stuck somewhere without something to read?). I got through a total of about six pages, I think.

The sessions were mostly good, and the early (as in 7:30am) round tables were even better. It's an open secret that the round table discussions can be the best part of the conference. I tried to find sessions on the SACS five year report, which is a new requirement for reporting mid-cycle. Here are some of the more interesting bits from my notes:
  • You can change the QEP mid-stream. Radical change, like revising the goals, isn't recommended, but it is accepted that institutions change, and plans don't always work as intended. Liberty University gave a presentation on this.
  • After the five year report, the QEP is no longer relevant to accreditation. Some projects may be over after five years. Others are property of the institution, to do as it pleases. Some will become institutionalized, others quietly dropped. Basically, after the report, it's time to start thinking about the next one.
  • For the pilot institutions--the first to come under the Principles of Accreditation--no one flunked the impact report on the QEP except for institutions who simply didn't execute it at all. On the other hand, several sections of the limited compliance certification that goes with the report were problematic, including documenting policy for handling student complaints and properly addressing distance learning programs.
  • Some institutions have prepared drafts of the impact report and are willing to share. For example, University of West Florida has a wealth of public documents about their QEP here.
  • On the SACS website, under Institutional Resources, there are the official documents about the report. The direct page is here, which includes report instructions and timeline.
  • SACS voted to change the rules to make it easier to pass the QEP (section 12 of the Principles). This is a technical change that allows recommendations to be made about the QEP proposal during the decennial reaffirmation process without triggering a full-fledged punitive sanction. This doesn't have anything to do with the five-year report, but signals that SACS is being reasonable about the requirements.
  • The five-year report is encouraged to be electronic (CD or website--though I'd avoid the website), but it should be self-contained. We are not supposed to submit both paper and electronic copies, unlike my experience with the compliance certification, where I learned at the last minute that they wanted both (in addition to renumbering all the sections). Any electronic report should be user friendly. I'd like to underline that. The typical higher ed admin is NOT tech-savvy. Use low-tech solutions. I still like paper, based on my experiences with review committees, and don't see any advantage to risking electronic submissions. We were advised that whatever we present should not depend on links to the main university site--the report has to be self-contained, even if electronic.
There's much more from the conference, which I'll get to later. Let me sign off with some proof of my assertion that SACS attendees aren't much into technology. I used one of the commons computers at the conference to browse to Twitter to see what the backchannel chatter was. Here's all that's there:
One lousy post. Compare that to #educause...

Monday, November 30, 2009

Research and Ed Blogs

I haven't been blogging here much because I've been spending time on a research project. It requires a lot of programming and machine-time to actually run the programs. I have it down to a science now, so at this point it's mostly tweaking parameters to see what happens. Basically, I created an artificial life workbench to test out some ideas about evolution and survival in a computational context. You can read background here if you're interested. The "artificial life" consists of little computer programs that have to solve certain tasks to survive. They reproduce and mutate if successful enough. The special programming language is very concise, and the critters look like >-]- and +#}O{]<>}. I chose the symbols for their (mostly) bilateral symmetry, so if you squint they actually do look like weird life forms. The first one, when run in the environment does this:
  • > Look at the old environment (looks like feelers, no?)
  • - Subtract one from it
  • ] Skip to the end unless the result is zero
  • - Subtract one more if it was a zero
  • Output the result
The environment is only ever 0, 1, or 2, so the critter maps 0 or 1 to 9 (arithmetic wraps around, so zero minus one is nine) and 2 to 1. This is exactly a recipe for survival. It took 109 generations to evolve, and only happened once in 100 trials.
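To make the semantics concrete, here is a minimal Python sketch of an interpreter for just the symbols in >-]-. It's my reconstruction from the description above, not the actual workbench code; in particular, the single register and the wrap-around (mod-10) arithmetic are assumptions.

```python
# Minimal sketch of an interpreter for the critter ">-]-".
# Assumptions (not the real workbench): one register, mod-10 arithmetic
# (so 0 - 1 = 9), and ']' jumps past the rest of the program unless
# the register is zero. The final register value is the output.

def run(program, environment):
    reg = 0
    i = 0
    while i < len(program):
        op = program[i]
        if op == '>':              # look at the (old) environment
            reg = environment
        elif op == '-':            # subtract one, wrapping around
            reg = (reg - 1) % 10
        elif op == ']':            # skip to the end unless it was zero
            if reg != 0:
                break
        i += 1
    return reg

for env in (0, 1, 2):
    print(env, '->', run('>-]-', env))
# Prints 0 -> 9, 1 -> 9, 2 -> 1, matching the mapping described above.
```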

In other news I came across a nice list of education blogs and news sites. I haven't gone through them all yet, but many I haven't seen before. They are not all higher education. The list is here.

I also discovered Crossing the Finish Line, described in one review on the Amazon page as "The most comprehensive look yet possible at the determinants of graduation rates--and what might be done to improve them." It looks fascinating, but I've only been able to take a peek so far.

Also of note: NPR's piece "Who Needs College, and Who Shouldn't Go?"

Friday, November 20, 2009

Increasing Transfers

A Chronicle article "Report Highlights Characteristics of Colleges With High Transfer-Success Rates" cites a new publication from The Pell Institute about successful transfer programs. The new report hasn't shown up on that organization's publications page yet, but there are some interesting tidbits in the review:
The study found that early exposure is critical to ensuring a successful transition to college, especially for students who are from low-income families or are the first in their families to go to college. Such students are likely to be unfamiliar with higher education and what it will take to earn a bachelor's degree.
This echoes what I found in a retention study--first generation students didn't really understand the product they were buying, and quickly became disenchanted once they arrived (here).

The article lists some elements of successful two-year programs that built bridges to four year programs:
  • specialized advising
  • flexible scheduling of academic and support services
  • first-year seminars that include strategies on note taking, test taking, and navigating campus services
  • one-stop shops, where services such as registration and financial aid are placed together in one central location
  • replacing a tall customer-service counter with desks to make the interaction between students and staff members more accessible and personal
  • offering clubs and organizations
  • setting aside an hour each day when no classes were scheduled to further encourage participation
  • employing faculty and staff members of similar backgrounds to their students
I imagine that many of these elements would work from the other perspective too: implemented at four-year institutions that want to attract more transfers from two-year schools.

Thursday, November 19, 2009

Communication Good, Communication Bad

Two articles in the Chronicle at the top of the most-viewed list are interesting to contrast. First is "Turnaround President Makes the Most of His College’s Small Size" about G. T. Smith as President of Davis & Elkins College:
[I]n higher education, Mr. Smith is known as a turnaround artist, a man with the talent and disposition to take a failing college and transform it into a winner.
Quoting Smith, they give the reason for his success as "The underlying thing for me is relationships—hardly anything important happens that doesn't have to do with relationships." For example:
After years of stagnant enrollment at Davis & Elkins—which had developed a dismal local reputation, according to some local high-school counselors—the freshman class was up 50 percent this fall. As of November, the number of applications was more than seven times higher than at this time in 2007, and eight students had already put down deposits. Consider that those numbers came after the college had canceled its advertising campaign and done away with mass mailings in favor of a highly personal approach to recruiting students: getting to know their names, their parents' names, their dogs' names, and conveying the message that at this college of 700 students, you're part of a family.
A bit off-topic, but equally fascinating is his take on the changing landscape of higher education, which I've speculated wildly about in these virtual pages. Pres. Smith's version:
"We can't in a Pollyannaish way say, 'The liberal-arts college will always survive.' We are all under threat or under siege," he says. "It comes down to whether you are going to look at your future based entirely upon your past or what others are doing, or whether you are going to look at the fundamentals, the principles, the basics, and have the discipline to stay with those."
That was "Communication Good." Now the other one.

A while back I blogged happily about the backchannel--the emergence of live chattering by text, Twitter, blogging, etc. during presentations, class meetings, or any occasion where people have the means and time. It's the 21st century version of passing notes and whispering.

Today's references are:
You can gather from the titles what happened. In the case of the last article on the list, a presenter at Web 2.0 Expo was giving her talk with a huge screen in the background showing the conference Twitter channel feed. When some tweeters started making disparaging comments on the backchannel, she didn't understand why the audience was reacting oddly to her talk, and got understandably flustered. The other articles talk about other instances of this "tweckling," a neologism I find kind of cute, even if the behavior is reprehensible.

Interestingly, browsing Technorati for "backchannel" only produces positive-sounding hits, with advice for speakers on how to build one, for example.

The moral of the story? It would be ridiculous to try to generalize anything based on these two completely different articles about communication, so I can't resist. The difference is between high-bandwidth, high-stakes (personal responsibility for words) communication and low-bandwidth, low-stakes (essentially anonymous) communication. It's not that the first is good and the second bad, just that they're very different. And the no-man's land in between--email--is the worst of both worlds: high stakes and low bandwidth. Ever had a joke or off-hand comment interpreted differently than you intended it? Add to that the ease of someone sending your email to a third party, and the potential for mischief multiplies.

One of the articles notes that some conferences post rules of etiquette for the backchannel. Of course, if the tweeters are anonymous, this isn't going to have much effect. In the end, though, there isn't true anonymity on the Internet. Pretty much anything can be traced if there is enough interest in doing so. Best policy, in my opinion, is: if you think it's important to say it, put your name on it, be clear, and be polite. I probably fail on the second point, but it's a work in progress. The best payoff would be to get almost as much mileage out of low-bandwidth communication as Pres. Smith gets out of high-bandwidth communication. That's a powerful idea that could spawn a sea of consultants: maximize the effect of short, text-based, remote communication.

Of course, this is the Internet; someone's probably written a book on it already.

Wednesday, November 18, 2009

Grading and Assessment

The topic of how grades and assessments could be aligned has been mentioned here before (see "Assessing away Grades"). Nils Peterson pointed me to an active discussion on the topic on HASTAC called "Grading 2.0: Evaluation in the Digital Age." There are some good links there, topic questions, and several comments. One of the discussion points asks:
3. Can everything be graded?
- How important is creativity, and how do we deal with subjective concepts in an objective way, in evaluation?
Here, I think we run smack into the problem. It goes like this:
  1. Grades have economic consequences for both students and teachers.
  2. Because of this, grades have to be defensible during a challenge and review process.
  3. Because they have to be defensible, grades have to have at least the appearance of objectivity.
  4. However: the best assessments should be free from economic influence, and may be subjective (see the whole Assessing the Elephant thing).
This problem only rears its head for complex learning outcomes. If you're teaching multiplication tables, it's not a problem to create objective (maybe even valid and reliable) instruments. What about creativity, however, as posed in the question above? Can we really slice up that concept into "dimensions" and rubrics that capture the essence of what creative genius produces?

It's instructive to see how grades are assigned in the fine arts and performance arts. There simply is a lot of subjectivity. I'm generalizing from limited experience, but I think that the key is the attitudes and methods the assessor uses more than the actual assessment. For example, if I say "your work is all derivative and boring," it's very different from saying "to my taste, this doesn't excite me." The former sounds like an objective statement, and the latter is clearly subjective. Students aren't stupid, and they know that there's a difference: dressing up subjectivity as objectivity only irritates them. What I've seen from successful art-type assessments is that the effort put into the work counts for a great deal. Creativity is a kind of exploration, perhaps, requiring trial and error, and therefore time invested. Art profs want to see portfolios, sketchbooks, incomplete works, anything that shows that the student is engaged. And the judgment of how much work one has done can be fairly objective; it's something you can talk to the student about and reach agreement on. Of course, the quality of the engagement counts as well, but to some extent I think that comes out also, if one can review all the cars in the whole train of thought. This is certainly true of teaching math at the upper levels. It's a thrill when a student comes to you with "I thought of this problem and tried to solve it. Will you look at it?" Whereupon you're presented with scruffy bits of paper (mathematicians will write on anything) with formulas all over them. Most of them are wrong--false starts. It's like what one of my art colleagues said about looking at sketchbooks: it's raw and unprocessed, and more powerful than a finished work.

So it may be that there is a natural division between objective and subjective assessments and grades. The former are relatively easy. But maybe for more complex outcomes we need an approach more like that of art: look at not just a finished product on a test or paper, but demand to see the corpus of work, mistakes and all, that led to it. Technology can obviously help with this because information is cheap to store "forever." Portfolio systems as they are generally currently conceived are not really the right tool for this--what you'd want is a virtual space for storing documents and imposing a bit of structure on them. Perhaps a mind-map hyperlinked to documents and meta-data tags on the whole thing, so it can be sorted and presented by different facets. Add the ability for an instructor to freely annotate these nodes and artifacts with hidable notes, and it starts to sound attractive. At any rate, this is not the kind of problem that another scoopful of rubrics can solve.
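For what it's worth, here's a rough Python sketch of the kind of structure I have in mind--nodes hyperlinked to artifacts, tagged with metadata for faceted sorting, and carrying hidable instructor notes. The names and fields are purely hypothetical; this isn't a reference to any existing portfolio product.

```python
# Hypothetical sketch of the annotated mind-map idea: nodes link to
# artifacts (drafts, sketches, false starts), carry tags for faceted
# sorting, and hold instructor notes that can be hidden or shown.
from dataclasses import dataclass, field

@dataclass
class Note:
    author: str
    text: str
    hidden: bool = True          # instructor annotations default to hidden

@dataclass
class Node:
    title: str
    artifact_path: str = ""      # link to a document, image, scan, etc.
    tags: set = field(default_factory=set)
    notes: list = field(default_factory=list)
    children: list = field(default_factory=list)   # hyperlinked sub-nodes

    def facet(self, tag):
        """Return all nodes in this subtree carrying a given tag."""
        found = [self] if tag in self.tags else []
        for child in self.children:
            found.extend(child.facet(tag))
        return found

root = Node("Senior project", tags={"portfolio"})
root.children.append(Node("False start: first proof attempt",
                          artifact_path="scans/draft1.pdf",
                          tags={"draft", "math"}))
root.children[0].notes.append(Note("instructor", "Promising idea; see page 2."))
print([n.title for n in root.facet("draft")])
```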

Saturday, November 14, 2009

Planning Resources

As we get into the season of setting tuition, finalizing aid policies, tweaking strategic plans, and prognosticating about enrollment, it's handy to have a reference library.

Some of these links have been cited before, but I wanted to put them all in one place. I will likely add to them without bothering to add "update" to the post, to make this a reference page of sorts. If you have a good one, please forward it.

Strategic planning:
Retention/Success:
Recruitment:
Pricing:
Financial Aid:
Web Strategy:

Friday, November 13, 2009

Quote, Representation

Reviews of an institution's general education (or liberal arts requirements) might be characterized as nasty, brutish, and interminable, to paraphrase Hobbes. You can get a taste of it in "Progress, But No Votes, For Review" in the Harvard Crimson, about that institution's journey to re-enlightenment. And then there's a fascinating set of minutes from "WCU" here that documents the tortured trip:
The creation of a dynamic general education program that the faculty will "buy into" can perhaps change the University culture in a way that talking cannot. So, how are the faculty enticed to "buy into" a program? It is necessary to ignore the noisy 15-20% who fight any change in the status quo. [3/21/97 Minutes]
Also found in this rich set of minutes is a bibliography of recommended sources for those foolish enough to buy a ticket on this train:
The committee will look first at AAC's New Vitality in General Education, the first half of Weingartner's Undergraduate education: Goals and Means, AAC's Strong Foundations: Twelve Principles for Effective General Education Programs, and Gaff's New Life for the College Curriculum.
This was, of course, before the AAC&U released its LEAP initiative.

There is a virtual thicket of documentation to be found via Google search--a salad of despair, Thomas Pynchon might say (see "Meeting Salad" for the quote).

My particular interest this morning was the composition of gen ed review committees, but it's easy to get lost surfing the epic struggles documented online--surely enough material for several posts. Heck, I should invite applications for a gen ed review "sub blog."

The practice I'm accustomed to is to select a representative from each large academic unit (a department or division, for example) that has content usually found in general education. This would include sciences, humanities, social sciences, and PE. Our accreditor, SACS, actually has a list of minimum requirements for a gen ed program.

Here's an example from "MU":
The General Education Review Committee is composed of two faculty representatives from each of the four academic units elected by Faculty Senate for overlapping terms of two years, one representative from non-school faculty elected by Senate for two years, and two student representatives elected by Student Senate for overlapping terms of two years. In addition, a chairperson is elected by Faculty Senate from the Faculty Senate membership to serve a three-year term.
There are also non-voting members with administrative titles.

Both the AAC&U and Academic Commons advertise an interest in the liberal arts. The former has a book on the subject of gen ed review, but you have to buy it to see it. The site whatwilltheylearn.com compares general education curricula to see if they get the basics, as defined by a conservative list of subjects.

The issue that I take up in the title of this post is the question of representation. If we assume that the practice of creating a review team in the way described above is common, then the result is a discriminatory process that almost guarantees political turf battles.

Think about the arrogance in the assumption "we'll pick a math guy to represent all you math people, and a history person to represent all of them, and so forth." This is roughly the same as picking a Cubs fan to represent all Cubs fans on how a stadium should be built, or picking someone who likes hot dogs to represent all hot dog eaters on how to choose a new menu.

There are many, many design questions for a general education review team to consider, and only a small part of it concerns the number and type of courses to be chosen. With the usual discriminatory practice, the only basis for "representation" is to lobby for one's own area: let's pack in as many math courses as we can, so we can get more faculty slots. Unless the committee can rise above that (and take the heat from their respective departments), the outcome is almost guaranteed to be bland, and the dynamics encourage unhealthy inter-departmental politics. Of course, there's no way to avoid politics. Why not make that a virtue?

In a representative democracy, we don't go out and pick one Toyota lover to go to congress to represent all Toyota lovers. That kind of blatant discrimination would never be tolerated (okay, gerrymandering aside--but that's ugly too). What if we modeled the selection of a gen ed committee on an actual representative democracy? Nominees for the committee slots would be put up for a vote. They would be expected to set out their views on general education, probably in writing, but perhaps in a debate with other candidates too. It could be a vibrant, rich affair that properly celebrates and illuminates the process as it begins. It would also illustrate the use of liberal ideas that were hard-fought in the Enlightenment, instead of forcing an illogical form onto a titanic exercise in critical thinking. The irony of the usual process is eye-popping.

Just like in a representative democracy, the election of the representatives does not by itself create the design. The hard work of design and compromise, and yes, departmental politics, still has to be done.

Of course, this may be too Utopian. Perhaps what would happen is that the initial gamesmanship over the election of representatives would center solely on discipline loadings for the final product. Maybe the big departments would just vote themselves candidates that would beef up their areas. If so, that would be sad, but at least it would reveal the true motivations of the academy.

Update: I just noticed that InsideHigherEd has an article about the "jeopardy" liberal arts is in, citing a need for proof of relevance. They note that some institutions have presentations that resemble the kind of political debate I described in my Utopia:
The University of Alaska at Anchorage has started a lecture series where professors from different liberal arts disciplines give talks aimed at attracting a popular audience. Utah State University has invited successful alumni back to talk about how their liberal education shaped their careers.
It seems that with more open processes for constructing (via faculty procedures) and validating (with authentic assessment) general education means and results, potential students would have a better chance of seeing the point. In a retention study I did (see "The Value of Retrospection"), we ultimately discovered that many of our students simply didn't understand the product they were buying.

Thursday, November 12, 2009

The Memescape

In The Selfish Gene, Richard Dawkins popularized the notion of a 'meme,' which complements the biological 'gene.' When he took a question about the subject at a talk in Columbia, SC recently, he seemed almost rueful, saying something like "Oh, no. Not the meme question." But there's no turning back now, as the Google Trends chart shows:

A search on Amazon.com revealed about eighteen books about memes, and there are currently 49 million hits on Google, including the wiki page, a Daily Meme site, and Know Your Meme, which claims to document Internet phenomena (one might say ephemera). A whole "science of memes" called memetics has sprung up, claiming 487,000 web hits currently.

So what's a meme? The wikipedia definition is just fine:
A meme (pronounced /ˈmiːm/, rhyming with "cream"[1]) is a postulated unit of cultural ideas, symbols or practices, which can be transmitted from one mind to another through speech, gestures, rituals or other imitable phenomena
Remember pet rocks, anyone? Meme. Tickle-me Elmo. Another meme. Obviously, "meme" itself is a meme, and a very successful one. If we think of any medium that will permit complex organizations within it, it seems to find fixed points (self-replicators) on its own. We might call this the cosmic version of Murphy's Law: if a system has fixed-point expressions, they emerge naturally from noise. Feed some noise into a microphone and point it at the speaker. You don't get amplified noise, you get a particular resonant frequency that is tuned to the physical characteristics of the system. This is a powerful idea because you can work it backwards too: look at what emerges naturally from a medium and then figure out what the characteristics of the medium must be.
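The feedback analogy is easy to mimic numerically. Here's a toy Python illustration (mine, not Dawkins'): start from random noise and repeatedly apply the same transformation, and the system settles onto a fixed point determined by the transformation, not by the noise.

```python
# Toy illustration: iterate a fixed transformation starting from noise.
# The limit (the "resonant frequency") depends on the map, not the noise.
import math, random

def settle(x, steps=100):
    for _ in range(steps):
        x = math.cos(x)        # any contraction mapping would do
    return x

for _ in range(3):
    noise = random.uniform(-10, 10)
    print(round(noise, 3), '->', round(settle(noise), 6))
# Every run converges to about 0.739085, the fixed point of cos(x).
```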

Ever get a chain letter? That's a particularly interesting one because it's easy to dissect. This idea is described by Dawkins, but I'll paraphrase. A good way to get an idea replicator going is to combine these elements:
  1. An imperative to replicate the idea. This is the basic "reproduction drive." E.g. Send this to ten of your friends.
  2. A carrot. If you perform #1, something good is likely to happen to you. E.g. Cite stories of people who had amazing fortune after forwarding the email to ten of their friends.
  3. A stick. If you fail to perform #1, something bad will happen to you. E.g. This one dude forgot to send the email and a brick fell on him.
Of course, the more fake documentation (a REAL LAWYER friend said "blah blah blah"), and other emotional mumbo-jumbo you can ladle onto the thing, the better chance it has. In order to survive, it needs to multiply its chances geometrically (at least, I think I can prove that if you give me an hour), and hence the replication imperative is structured to multiply senders. Dawkins notes that the elements above are found in most religions.
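The geometric-growth claim is just the usual branching-process threshold. A back-of-envelope Python sketch, with numbers invented purely for illustration: if each recipient forwards the letter to n friends with probability p, each copy spawns about p times n new copies, and the chain fizzles unless that product exceeds one.

```python
# Back-of-envelope branching-process sketch for a chain letter.
# Illustrative numbers only: each recipient forwards to n friends
# with probability p, so each copy spawns p*n copies on average.
def expected_copies(p, n, generations=10):
    total, current = 1.0, 1.0
    for _ in range(generations):
        current *= p * n           # expected copies in the next generation
        total += current
    return total

print(expected_copies(p=0.05, n=10))   # p*n = 0.5: fizzles out (about 2 copies)
print(expected_copies(p=0.20, n=10))   # p*n = 2.0: explodes (about 2,047 copies)
```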

This is a mind-broadening idea, because we all carry around the burden of memes that are so deeply buried in our nous that we don't realize they are there. Some are undoubtedly hardwired by evolution so we avoid starvation, for example. Not everything you believe can be logical. This is because even logic is based on assumptions (called axioms). Karl Popper ran into this problem when he argued that science never proves a theory, but only disproves theories that don't work. By his own logic, he couldn't prove that. You have to start somewhere. It's a nice exercise to dig around in your beliefs and see what's there. Could be a great self-assessment assignment for a course that contains noncognitives too. Which leads to the next topic.

Memes and the Academy. What does this have to do with teaching, learning, and assessing? Obviously teaching has to do with the transmission of ideas. I think we miss a trick by not trying more actively to transmit habits of mind as well. After all, part of being open-minded is the meta-cognitive ability to examine one's own beliefs. We could do that more deliberately in our curricula.

There apparently is some scholarly activity on this topic, for example "Memes and the Teaching of English," which "Examines why some sayings and catchphrases stick in people's minds, while others are unrecognized and unused. Offers an answer to this question from an evolutionary standpoint."

It's much more natural to me to think of assessment on the meme level, rather than on the cognitive ability level. Individual concepts and skills can be taught. I'm not sure how to teach raw cognitive ability. Assessments of the former are more or less straightforward; assessments of the latter are steeped in statistical voodoo and ed-psych theory that may or may not have a basis in physical reality.

In practical terms, you can use memetics every day. Here's an example. You're sitting in a committee meeting, where the members are problem-solving. You have some ideas of your own about how the issue at hand might be resolved. What do you do?

First, there are always ideas. Lots of ideas. I'd recommend sorting through your own in your head before advancing one. Make sure it has a chance of survival. Wouldn't want to get the poor meme's hopes up only to be dashed immediately. There is value to producing good ideas--it makes you seem smart--and so your standing in the community is enhanced when you display brilliance. So you may choose to pick your best idea and propose it. This is somewhat Machiavellian because the loading is "Professor Zza has good ideas" rather than the actual idea itself. We all have to play that game early in our careers. Of course, it extends to publishing research, which is more substantive than what happens in committees, but the idea is the same. This is the reason there are lots of meaningless papers produced--the real message is: Zza is really productive.

Once past the need to establish oneself, there is (sometimes) the altruist urge to actually reach good decisions in committees, simply for the good of the academy. This is an entirely different approach. It's like fishing. The problem is that everyone has only so much political capital to spend. Even if you have wonderful ideas, no one wants to admit that their own ideas are ugly misshapen things, so there is a kind of baby worship that happens when someone produces a newborn meme, no matter how defective and ill-begotten. At least, this can happen. If you're lucky there will be a ruthless meme-killer in the group who wields an idea-axe to put these unfortunate ideas out of their misery. That's a political minefield, so better have tenure first, and plan to be labeled a "character" for the rest of your career.

The fishing idea works like this: first, admit that other people have ideas as good as yours, and dedicate yourself to finding the best ones overall--not by wielding the axe, but by subtle twitches of the fishing lure. Help turn the discussion toward the ideas you see as best, and toss out a distraction if a real clunker comes along. A bright shiny object is as effective as an axe.
"I think we should eliminate faculty parking altogether."
"Hey--did you hear the merit bonuses are out?"
Check your ego. If you have a really great idea, wait to see if someone else comes up with it too. You can nudge your lure a bit toward the idea and hope someone grabs it. Then speak out in support of it strongly. This gives the little meme the best chance at life. For example, imagine that you like the idea of instituting an Assessment Day in the academic calendar to allow program assessments and surveys to be administered all at once. You might dangle your lure thus:
You: One of the problems is finding the time to get all these assessments done in an organized way. We don't want to duplicate students--ask them to take the same survey twice. They hate that.
Prof Eks: Maybe an assembly of students on Saturday?

You: They'd never show up. It would cost us a fortune in iPods to motivate them. I wish we could take class time to do it--you know, all at the same time period.

Prof Eks: That would never fly--we'd need to sacrifice a whole day from the calendar.

You: That's true. We'd need a whole day, probably. Hmmmm.

Prof Why: What if we added a day? Call it Assessment Day?

You: What a great idea! I've heard of this being done at Harvard and Yale. It would solve all our problems! Super!
You get the idea. You might want to hone your acting skills. A less honest way to engineer memes, which I don't recommend, is to attribute your own ideas to other people as a way of flattery. "You remember that idea you had about an Assessment Day? I really think that's a winner." If it really is their idea, that's fine, of course. But if you're inserting it into their memespace, tagged with their own ownership, you might get a funny look. It happens, though--now you're perhaps inoculated against such attacks. By means of this meme. Enjoy.

Update: Note that during the "fishing expedition" it often happens that someone advances an idea that's better than yours. At that point, it's obvious what to do: support the better idea.

Wednesday, November 11, 2009

Assessing Happiness

In The Chronicle of Higher Ed, an article caught my eye entitled "What's an M.B.A. Worth in Terms of Happiness?" (Subscription required: The Chronicle hasn't caught up to the times yet). The premise:
Any sensible person would rather be happy than rich, although many people often confuse the two—business students among them. Those who choose to attend business school on the assumption that an M.B.A. will help them change jobs, make more money, and therefore be happier are very likely misinformed.
The author claims that there is a game theory problem--a Zog's Lemma instance, if you will--in business schools. I'll let author Robert A. Prentice explain:

Unfortunately, M.B.A. programs are currently ranked—by U.S. News, BusinessWeek, and other unduly influential publications—using criteria that prominently include starting salaries for graduates and salary differentials pre- and post-business school. Rankings have such an important impact on M.B.A. programs in their intense competition for students, faculty members, and resources that it is unsurprising that the schools often try to game them—say, by admitting students not because they are the strongest applicants, but because they are interested in finance and consulting, which have historically been the highest-salaried jobs.

An implication made in the article is that this undue emphasis on money comes at the expense of ethical behavior, which is linked to happiness:
Other studies indicate that people who act ethically tend to be happier than those who do not, suggesting that we are evolutionarily designed to derive pleasure from receiving the approval of others and from doing the "right" thing. Brain scans indicate that when we act consistently with social norms, primary reward centers in the brain are again activated.
Notice the evolutionary psychology argument.

Can you realistically assess happiness? The only book I've read on the topic is Dan Gilbert's "Stumbling on Happiness," which I recommend. It's not a self-helpy book, but a description of the research that has been done in the field of happology. Dr. Gilbert's website has links to other projects he's engaged in, including a link to an application to track your own happiness on your iPhone, which is a brilliant idea. You could do the same thing on Twitter, methinks: one day a week send out tweets like #happymeter 5, for 5/10. Want to join me? Here's the link to results.
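The tallying side of the Twitter version is trivial, by the way. A hedged Python sketch, assuming posts follow the #happymeter N convention proposed above (fetching the tweets themselves is left as an exercise):

```python
# Sketch of tallying "#happymeter N" posts (N out of 10), per the
# convention proposed above. No Twitter API calls here; this just
# parses whatever strings you hand it.
import re

def average_happiness(posts):
    pattern = re.compile(r'#happymeter\s+(\d{1,2})', re.IGNORECASE)
    scores = [int(m.group(1)) for p in posts for m in pattern.finditer(p)
              if 0 <= int(m.group(1)) <= 10]
    return sum(scores) / len(scores) if scores else None

sample = ["#happymeter 5", "Long week... #happymeter 3", "#HappyMeter 8"]
print(average_happiness(sample))   # 5.33...
```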

Maybe I think the idea is brilliant because I created a similar application a few years ago. Our student dropbox (a mini-portal) became wildly popular, and as a little research project I added a star rating system that looks like this:

There were no instructions, just the raters. On the login page, I then had the database calculate averages and display them. Here are the current ones.

From the beginning the trend has been school > life > world. There are about two years of data now, except that I did something stupid during the election and changed the rater to keep a running poll for a while. The graph shows a semester's worth of data, looking at week-over-week changes.
Self-reported happiness declined more in males than females as the semester wore on for that group of students who continually reported their ratings. This may be connected to higher attrition rates in males, which we also observed. I didn't have much luck in correlating the two, however. Other results showed that older students were significantly happier than younger students. The report is two years old now, but you can look at the whole thing here. After I read Dan Gilbert's book, I emailed him about this research. He replied with something like "Thanks for conditioning your students to take surveys!"
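If you want to replicate the week-over-week view on your own data, the computation is simple enough. A sketch in Python/pandas with hypothetical column names (the real dropbox data is structured differently):

```python
# Sketch of the week-over-week analysis: average the self-reported ratings
# by week and gender, then difference consecutive weeks. Column names and
# values are hypothetical stand-ins, not the actual dropbox schema or data.
import pandas as pd

ratings = pd.DataFrame({
    "week":   [1, 1, 2, 2, 3, 3],
    "gender": ["M", "F", "M", "F", "M", "F"],
    "school": [4.2, 4.4, 4.0, 4.3, 3.7, 4.2],
})

weekly = ratings.groupby(["gender", "week"])["school"].mean().unstack("week")
print(weekly.diff(axis=1))   # week-over-week change in mean rating, by gender
```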

So what professions are the happiest? I found this relatively current (2007) report from the University of Chicago: "Job Satisfaction in the United States."

It's nice to see that education administrators and teachers are both on there. Who'd have guessed? The unhappiest workers of all turned out to be roofers. You can see the whole list and other observations in the article itself. Job satisfaction isn't exactly the same thing as happiness, but perhaps it's close.

So does money buy you happiness? Aside from the inner glow of helping old ladies with their 401ks, isn't there a rewarding feeling that comes from shopping for a new yacht? The (2001) study "Does Money Buy Happiness? A Longitudinal Study Using Data on Windfalls" tries to answer that question by looking at people who won the lottery or received an inheritance. From the abstract:
A windfall of 50,000 pounds (approximately 75,000 US dollars) is associated with a rise in wellbeing of between 0.1 and 0.3 standard deviations. Approximately one million pounds (1.5 million dollars), therefore, would be needed to move someone from close to the bottom of a happiness frequency distribution to close to the top. Whether these happiness gains wear off over time remains an open question.
My guess is that getting the money "for free" isn't as rewarding as say, scraping it off the top of predatory loans, but judge for yourself. I'd love to participate in the next study, regardless. Where do I sign up for the winning lotto numbers?

Update: See Reuben Ternes' comment below about Gross National Happiness. I also fixed a typo.

Tuesday, November 10, 2009

Numbers and Names

Words have meaning. This is true even if the words are not formally defined; people could talk to each other before dictionaries came around. The facility to speak and understand is so fluid in fully-functioning humans that we underestimate how difficult it is (see Moravec's Paradox). Undoubtedly there was strong evolutionary bias toward creating this ease of communication, as opposed to making our brains facile with long division, for example.

Because words are powerful, they get hijacked. There is economic value attached to the effect of certain words, like "new and improved," and so they get put into use like blue-collar workers marching off to punch in. Sometimes this is manipulative or cynical, as brilliantly illustrated by Orwell in 1984. In the Russian revolution, Bolsheviks were pitted against Mensheviks, names stemming from a narrow vote. The former word means "majority" and the latter "minority." Imagine if your political faction is saddled with the second name...

In outcomes assessment, and in reporting psychometrics generally, common words are introduced sloppily. I've addressed the big one, "measurement," elsewhere. Another good source of this kind of error propagation is studies that use factor analysis. I came across a good example, "A Look across Four Years at the Disposition toward Critical Thinking Among Undergraduate Students" by Giancarlo and Facione, while browsing Insight Assessment's research page. This company produces the Critical Thinking Dispositions survey I blogged about here. I don't mean to be critical of the authors, but rather to highlight a practice that seems to be endorsed by most who write about such things. The study itself is interesting, giving a before-and-after look at undergraduates as assessed by the survey. They introduce the topic of dispositions thus:
Any conceptualization of critical thinking that focuses exclusively on cognitive skills is incomplete. A more comprehensive view of CT must include the acknowledgement of a characterological component, often referred to as a disposition, to describe a person’s inclination to use critical thinking when faced with problems to solve, ideas to evaluate, or decisions to make. Attitudes, values, and inclinations are dimensions of personality that influence human behavior.
Notice the implication that personality comes in dimensions. Dimensions are by definition independent of one another, and as we shall see, the idea is that a whole disposition can be assembled as a linear combination of these pieces. This is an enthymeme without which the rest of the analysis cannot proceed, but it's a big leap of faith. As such, it ought (in the research community) to be spelled out explicitly. The mindset that attitudes and values and inclinations together create some kind of vector space is so wild that you'd think caution would be advised. If the implication is that these dimensions really are orthogonal (completely independent of one another), it's ridiculous on the face of it. What does it mean to have a very small amount of "attitude" but lots of "inclinations?"

Most things are not linear. If I'm talking softly, you may only hear bits and pieces of what I say. Increasing the volume will enable you to hear me clearly within a range, but we wouldn't be so bold as to say "talking twice as loud makes you understand me twice as well." We use linearity not because things are linear but because it makes it easy to do the analysis. In small ranges, it often makes sense to approximate non-linear phenomena with linear models, but one has to be careful about reaching conclusions.
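A concrete version of the small-range point, using a function everyone knows rather than loudness: the tangent-line approximation to sin(x) is excellent near zero and falls apart farther out.

```python
# Illustration of local linearity: approximate sin(x) near 0 by its
# tangent line y = x. Good in a small range, poor farther out.
import math

for x in (0.1, 0.5, 1.5):
    exact, linear = math.sin(x), x
    print(f"x={x}: sin(x)={exact:.3f}, linear={linear:.3f}, "
          f"error={abs(exact - linear):.3f}")
# error: 0.000 at x=0.1, 0.021 at x=0.5, 0.503 at x=1.5
```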

In the article, the assumption is that the disposition to think critically is the linear combination of a few component dimensions. These are listed:
Factor analysis of the CCTDI reveals seven distinct elements. In their positive manifestation, these seven bipolar characterological attributes are named truthseeking, open-mindedness, analyticity, systematicity, critical thinking (CT) self-confidence, inquisitiveness, and maturity of judgment.
Notice the passive voice "are named." Are named by whom? Here's the process: The survey is administered and the results recorded in a matrix by student and item. A correlation matrix is computed to see what goes with what. Then a factor analysis (or singular value decomposition, in math terms) is performed, which factors the matrix into orthogonal dimensions. To understand this, it helps to look at an animation of a simple case. If the dimensions have different "sizes" (axes of the ellipse in the animation), then a more-or-less unique factorization results. If the dimensions are close to the same size, it's hard to make that case. Each dimension is defined by survey items and associated coefficients. It supposedly tells us something about the structure of the results. Note that orthogonal means the same thing it did earlier: completely independent. You can have zero of one factor and lots of another, and this needs to make sense in your interpretation.
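The mechanics are easy to sketch in a few lines of Python: build the student-by-item matrix, correlate, and take the singular value decomposition. Note that the "dimensions" come out as anonymous vectors; nothing in the computation names them. (The data below is random toy data, just to show the shape of the pipeline.)

```python
# Sketch of the pipeline described above: responses -> correlation
# matrix -> SVD. The "dimensions" come out as anonymous vectors;
# calling one "truthseeking" is a human act, not a computation.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(200, 10))   # 200 students x 10 items (toy data)

corr = np.corrcoef(responses, rowvar=False)       # item-by-item correlations
U, s, Vt = np.linalg.svd(corr)

print("relative sizes of the dimensions:", np.round(s / s.sum(), 2))
print("item loadings on the first dimension:", np.round(Vt[0], 2))
# With random data the dimensions come out roughly equal in size, which is
# exactly the degenerate case mentioned above where no clean factorization exists.
```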

So: we do some number crunching and find associations between items. These are collected together and named. We could call them vector1, vector2, and so on, but that wouldn't be very impressive. So we call them "openmindedness", "attentiveness," and use words that already have meanings.

It's not even clear what the claim actually is. Is it that we humans perceive critical thinking dispositions as a linear combination of some fundamental types of observation, presumably presented to us in whole form by our perceptive apparatus? Or is it that in reality, our brains are wired in such a way that dispositions are generated as linear combinations?

It would be relatively easy to test the first case using analysis of language, like the brilliant techniques I wrote about in "High Five." I don't see any evidence that this sort of thing is done routinely. Instead, researchers eyeball the items that are associated with the dimensions that pop out and give them imaginative names. They may or may not be the same names that you and I would give them, and may or may not correspond to actual descriptions that someone on the street would use to describe the test subject.

I hope you can see the sleight of hand by now. In the case of this particular article, the authors go one step further, by describing in detail--in plain English--what the dimensions are (I have bolded what was underlined in the original):
The Truthseeking scale on the CCTDI measures intellectual honesty, the courageous desire for best knowledge in any situation, the inclination to ask challenging questions and to follow the reasons and evidence wherever they lead. Openmindedness measures tolerance for new ideas and divergent views. Analyticity measures alertness to potential difficulties and being alert to the need to intervene by the use of reason and evidence to solve problems. Systematicity measures the inclination to be organized, focused, diligent, and persevering in inquiry. Critical Thinking Self-Confidence measures trust in one’s own reasoning and in one’s ability to guide others to make reasoned decisions. Inquisitiveness measures intellectual curiosity and the intention to learn things even if their immediate application is not apparent. Maturity of Judgment measures judiciousness, which inclines one to see the complexity in problems and to desire prudent and timely decision making, even in uncertain conditions (Facione, et al., 1995).
These descriptions would serve suitably for ordinary definitions of ordinary terms (without the use of "measurement"), but no evidence is presented that the ordinary meanings of all these words correspond in any way to the factor analysis results, other than that someone decided to give the dimensions these names. The final touch is claiming that we "measure" these elements of personality with precision:
For each of the seven scales a person’s score on the CCTDI may range from a minimum of 10 points to a maximum of 60 points. Scores are interpreted utilizing the following guidelines. A score of 40 points or higher indicates a positive inclination or affirmation of the characteristic; a score of 30 or less indicates opposition, disinclination or hostility toward that same characteristic. A score in the range of 31-39 points indicates ambiguity or ambivalence toward the characteristic.
All of this strikes me as absurd. It's not that surveys can't be useful. To the contrary, they undoubtedly can give us some insights about student habits of mind. But to suppose that we can slice and dice said behaviors with this precision is far over-reaching, particularly in the use of ordinary language to create credibility without proof that these associations are strong enough to withstand challenge.

This practice is unfortunately common. The NSSE reports include dimensions like this, for example.

Sunday, November 08, 2009

Other Reading

Here are some noteworthy articles that speak for themselves:

Strategic enrollment planning:

From Noel-Levitz (papers and reports):
On assessment:
Educational Technology

Pricing Higher Ed

My last post included a link to "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education" by Epple, Romano, and Sieg from 2003. In the paper, they test economic models against actual data and reach some very interesting conclusions about how pricing works. One of the assumptions is "In our model, colleges seek to maximize the quality of the educational experience provided to their students."

I thought about this for a while. It's not obviously true, is it? I'm trying to remember how many meetings I've sat in where someone talked about the quality of educational experience. Of course, in many small ways programs, individual instructors, chairs, and so on do bits and pieces that impact this quality. And the SACS Quality Enhancement Plan is supposed to turn this into a visible project.

But by and large, I think most of my meeting time has been spent on solving problems, grinding away at the routine bureaucracy, or (once in a while) trying to make the bureaucracy work better. Of course, outcomes assessment is supposed to lead to continual improvements in the quality of education, but it would be a wonderful thing if board meetings were opened with the sentiment: we're here to improve the quality of educational experience.

As it turns out, I'm in the middle of a project to improve the "experience" part of that by helping organize strategic planning action items along those lines, and I'm going to start using that language.

In the article, the authors give some dependencies for quality:
  1. peer ability of the student body
  2. a measure of peer-student income diversity
  3. instructional expenditures per student
Quality is relative, and two of the dependencies listed above are intuitive: students don't want to attend classes populated with students who are all less able than themselves. They also perceive the institution's ability to spend money in the classroom. This one is reflected in college rankings too (see "Zza's Best Liberal Arts Schools"), which probably has some effect on decisions. The second dependency, however, is surprising to me.

They see a distinct stratification that bestows economic benefits to the top schools:
Colleges at low and medium quality level have close substitutes in equilibrium and thus a limited amount of market power. Admission policies are largely driven by the “effective marginal costs” of educating students of differing abilities and incomes.

Colleges with high quality have more market power. These colleges do not face competition from higher-quality colleges. Hence, they can set tuitions above effective marginal costs and generate additional revenues that are used to enhance quality.
This suggests a Darwinian struggle for schools at the low and mid-levels of means and quality. In a catch-22, they lack the pricing power to enhance their position much. But once breaking through a ceiling, it becomes easier. At least that's my interpretation.

On the subject of price, the authors illuminate the second dependency (financial diversity):
We also find that colleges at all levels link tuition to student (household) income. Some of this pricing derives from the market power of each college. This allows colleges to extract additional revenues from students that are inframarginal consumers of a college. However, as noted above, our empirical findings suggest that market power of lower and middle ranked colleges is limited. This suggests that pricing by income may be driven by other causes.
I found an explanation of what an "inframarginal consumer" is in another source: "The inframarginal consumer is willing to pay more for the good than is the marginal consumer." So, if your college has a good market position, you can charge a premium. But the authors argue that this isn't the whole story:
In this paper, we then also explore the role that income diversity measures play in determining college quality. Our findings here indicate that colleges and students believe that the quality of a student’s educational experience is enhanced by interacting with peers from diverse socioeconomic backgrounds.
Obviously there are many reasons for wanting a diverse student body, but the authors propose to actually use it as a factor in the pricing model. This begins to make more sense in Section 6 of the paper, where they verify empirically that college quality increases with income diversity, stating that "To attract students from lower-income backgrounds, colleges give financial aid that is inversely related to income as detailed below." While this is no doubt true for some institutions, others have a more directly self-interested reason for giving need-based aid: to enroll students who couldn't otherwise afford to attend. I talked about the revenue-generating effect of this "gap filling" in "The Power of Discriminant Pricing."
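The self-interested case is just marginal arithmetic: an otherwise empty seat filled at a deep discount still adds net revenue as long as the discounted price covers the marginal cost of one more student. A minimal sketch, with hypothetical dollar figures of my own:

    # Gap filling: does enrolling one more aided student add net revenue?
    # All figures are hypothetical.
    sticker_tuition = 28000
    institutional_aid = 16000         # the "gap filler" award
    marginal_cost_per_student = 4000  # cost of one more seat in existing sections

    net_tuition = sticker_tuition - institutional_aid
    marginal_gain = net_tuition - marginal_cost_per_student

    print(f"Net tuition: ${net_tuition:,}")
    print(f"Marginal gain from filling the seat: ${marginal_gain:,}")
    # A positive gain (here $8,000) means the discount pays for itself,
    # quite apart from any quality or diversity rationale.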

Also in Section 6, they make an observation about college size:
Absent scale economies, peer effects and endowments create a force for colleges to reduce size to increase student quality–in the limit maximizing quality by admitting a handful of brilliant students and lavishing the entire endowment on educating those students. The countervailing effect of scale economies is captured in our cost function primarily by the c3 term in the cost function.
This outlines a good strategy for an elite school: keep enrollment small, because it's easier to maintain a high average level of student quality, but not so small that the loss of scale economies drives per-student costs up unreasonably.
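The tension is easy to see in a toy calculation: shrinking the entering class raises average peer ability (admit only the top of the pool) and concentrates endowment spending, but a fixed cost spread over fewer students pushes per-student cost up. The applicant pool, cost figures, and endowment below are all invented for illustration, standing in loosely for the role the paper assigns to the c3 term:

    # Toy size/quality trade-off: smaller classes mean higher average peer
    # ability and more endowment per head, but fixed costs are spread over
    # fewer students. All figures are hypothetical.
    import random

    random.seed(0)
    applicants = sorted((random.gauss(1100, 150) for _ in range(5000)), reverse=True)

    FIXED_COST = 20_000_000   # facilities, administration, etc.
    VARIABLE_COST = 8_000     # per-student instructional cost
    ENDOWMENT_SPEND = 10_000_000

    for class_size in (200, 500, 1000, 2000):
        admitted = applicants[:class_size]   # take the top of the pool
        avg_sat = sum(admitted) / class_size
        cost_per_student = FIXED_COST / class_size + VARIABLE_COST
        endowment_per_student = ENDOWMENT_SPEND / class_size
        print(f"size {class_size:>4}: avg SAT {avg_sat:5.0f}, "
              f"cost/student ${cost_per_student:>9,.0f}, "
              f"endowment/student ${endowment_per_student:>9,.0f}")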

A hundred points of SAT is worth between $4,688 and $10,363 in merit aid (in 2003), according to the model output; the difference depends on which tier of college the applicant applies to.
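The per-point arithmetic is worth doing in your head; the dollar range is the paper's, the division by 100 is mine:

    # Rough value of a single SAT point in merit aid, from the model's 2003 range.
    low, high = 4688, 10363
    print(low / 100, high / 100)   # roughly $47 to $104 per point, by college tier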

Conclusions: First, remember I'm not an economist. But the paper is clearly written, and you can skip the mathy bits easily enough. The model presented has its errors, as the authors describe, but the approach seems to lead to some insights, like the relationship between size and quality, the effect of financial diversity on institutional quality, and price sensitivity by student ability and income. I haven't delved into all of these in my notes above. I don't know how hard it would be to simulate their model numerically to actually use it for building policy (e.g., by running scenarios), but it's probably worth showing the paper to your IR office. And if you have an economics department handy, maybe they can shed some light as well.

Saturday, November 07, 2009

Price Elasticity

Soon enough, boards and presidents, committees and task forces, will take up the question of setting tuition for next year. The discussion must vary considerably from institution to institution, but for tuition-driven privates, it's a nail-biting exercise.

The unthinking version goes like this:
President: Well, we have all the budget requests now. How short are we?
Finance VP: We're a million short, after trimming.
President: How much do we have to raise tuition in order to close the gap?
Finance VP: (calculating) About seven percent.
President: Great. That settles that. Next on the agenda is the parking problem.
The list of reasons why this won't work is long. First, one must of course account for the additional financial aid awarded when tuition goes up: probably in the neighborhood of 40% of the gain. But it could be worse than that: net revenue might actually decrease when tuition is raised. This is a question of price elasticity, or price sensitivity: how does demand for education at your fine institution vary with price?
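Before anyone settles on "about seven percent," it's worth running the arithmetic with a discount rate and some assumed price sensitivity. Here's a minimal sketch; the enrollment, discount rates, and the constant-elasticity demand assumption are all mine, not drawn from the articles below:

    # Does a tuition increase actually close the budget gap?
    # Enrollment, discount rates, and elasticity are hypothetical assumptions.
    def net_revenue(tuition, base_tuition=25000, base_enrollment=1500,
                    current_discount=0.35, marginal_discount=0.40, elasticity=-0.8):
        """Net tuition revenue under a constant-elasticity demand curve."""
        enrollment = base_enrollment * (tuition / base_tuition) ** elasticity
        net_price = (base_tuition * (1 - current_discount)
                     + (tuition - base_tuition) * (1 - marginal_discount))
        return enrollment * net_price

    baseline = net_revenue(25000)
    for pct in (0.03, 0.05, 0.07):
        gain = net_revenue(25000 * (1 + pct)) - baseline
        print(f"{pct:.0%} increase -> change in net revenue: ${gain:,.0f}")
    # With a sufficiently elastic demand (say elasticity = -2), the "gain"
    # comes out negative: raising tuition loses money.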

I have started a survey of ideas out there for approaching this problem, and this post will provide links to some articles. Down the road, I'll try to give more analysis and detail. For several of the articles, you need access through your library to get them.

University Business takes the question head-on with "Research Tools to Guide Tuition and Financial Aid Decisions," from 2007 but still quite applicable. They describe a tuition pricing study:
A tuition pricing study involves a blind survey of prospective students and their parents. Since response rates are better, and the sample can be controlled more carefully, most studies are conducted via telephone.
Here's an old (1995) article called "Tuition Elasticity of the Demand for Higher Education among Current Students" in Journal of Higher Education. An even older (1987) article in the same journal is "Student Price Response in Higher Education."

A 1997 case study can be found in "Some new evidence of the character of competition among higher education institutions" in Economics of Education Review.

There are a lot of old papers you can find on Google Scholar, but not many recent ones. Here's a magazine article, again from University Business (2003), from its "On The Money" column: "Overcoming price sensitivity ... means marketing affordability, and it's what every IHE needs to do." It makes an interesting charge:
Unfortunately, the financial aid award letter itself, although a critical component of communicating affordability, comes too late in the process to influence anything but yield on admitted students--significant, certainly, but in many instances, not sufficient.
This is an interesting practical problem, and the article poses some great solutions. Forward this one to your FA director today. Seriously.

If you like economics, you may find this one palatable: "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education," in Econometrica (2006). The paper shows "that the model gives rise to a strict hierarchy of colleges that differ by the educational quality provided to the students." Also:
Our empirical findings suggest that our model explains observed admission and tuition policies reasonably well. The findings also suggest that the market for higher education is quite competitive.
It's very dense with formulas. I'll try to read the tea leaves when I have more time.

Thursday, November 05, 2009

Habits of Mind

On Tuesday I wrote in "Learning and Intelligence" about how having the ability to think rationally is different from having the habit of using that ability. The source was a fascinating New Scientist article.

Yesterday, in a comment on an article in InsideHigherEd, Peter Facione wrote:
The California Critical Thinking Skills Test (CCTST), used by hundreds of colleges and universities throughout the US and worldwide, has quietly become one of the leading measures of critical thinking skills. Its companion tool, the California Critical Thinking Disposition Inventory (CCTDI), assesses the habits of mind which incline one to use critical thinking in real world problem solving and decision making. Research shows that having the skills to think well and having the disposition to use those skills in key judgment situations is not highly correlated and yet few educational settings are taking this into account.
I do not remember reading about the CCTDI before. Their website describes the instrument as:
The California Critical Thinking Disposition Inventory is the premier tool for surveying the dispositional aspects of critical thinking. The CCTDI invites respondents to indicate the extent to which they agree or disagree with 75 statements expressing beliefs, values, attitudes and intentions that relate to the reflective formation of reasoned judgments. The CCTDI measures the "willing" dimension in the expression "willing and able" to think critically. The CCTDI can be administered in 20 minutes.
You can find a list of abstracts for research using the CCTDI here. There are some interesting bits to read there, like:
Significant differences were detected in critical thinking disposition (CCTDI) between the two groups of students, Hong Kong Chinese students failing to show a positive disposition toward critical thinking on the CCTDI total mean score, while the Australian students showed a positive disposition. The study raises questions about the effects of institutional, educational, professional and cultural factors on the disposition to think critically. [Tiwari A, Avery A, Lai P. (2003)]
You may recall that the CIRP is also trying to do this using item response theory to estimate a dimension called habits of mind. It's described on the HERI website as "Interactions with students that foster habits of mind for student learning."

It seems to me that this is an opportunity to change the discussion about general education, using these ideas. If disposition is indeed different from ability, then perhaps the marination of a student for two years in survey courses ought to be focused on developing habits of mind more than on trying to assemble a skills list (like critical thinking). Or in addition to it, if you're a glutton for punishment.

A student who completes a solid degree program is going to come out of it with real analytical and creative skills he or she didn't have before. But the way curricula generally work, and the way our learning outcomes are sketched, I don't think it's common to address this other dimension: intentionally developing open-mindedness, truth-seeking, systematicity, and maturity (the dimensions into which the CCTDI is factored).

Wednesday, November 04, 2009

404: Learning Outcomes

tl;dr Searched SACS reports for learning outcomes. Table of links, general observations, proposal to create a consortium to make public these reports.

In grad school there was a horror story that circulated about a friend of a friend of a cousin, who was a math grad student in algebra. He had created a beautiful theory with wonderful results, and was ready to submit when it was pointed out to him that his axioms were inconsistent--they contradicted one another. The punchline is that you can prove anything of the empty set. This sometimes also happens to degree programs that suddenly have to prove that they've been doing assessment loops, except in reverse: building grand theories from the empty set.

I complained the other day that there weren't many completed learning outcomes reports from universities to be found on the web. So when I noticed Mary Bold's post at Higher Ed Assessment, "Reporting Assessment Results (Well): Pairing UNLV and OERL," I thought I'd hit paydirt. The hyperlink took me to a page at the University of Nevada, Las Vegas with a link advertising "Student Learning Outcomes by College." Without further ado, here's the page:

That's just too funny. (The link, of course, went to a 404 error page, hence the title of this post.) There are, however, excerpts from actual results listed in the teacher education site, which you can find here. That site is the OERL that Mary refers to in her post.

It did make me think, however. There must be a bunch of SACS compliance certifications out there on the web now, and section 3.3.1 (used to be 3.4.1) covers learning outcomes. Want to see how your peers have handled it? The name of the school in the table below links to the compliance certification home page for that institution. For good measure I'll throw in 3.5.1, general education assessment, too. You're welcome.

Institution | 3.3.1 | 3.5.1
Southeastern Louisiana University | link | link
Western Kentucky University | link | link
Berea College | link | link
The College of William and Mary | link | link
The University of Alabama in Huntsville | link | link
Nashville State Community College | link | link
Mitchell Community College | link | link
The University of Texas Arlington | link | link
University of New Orleans | link | link
Albany State University | link | link
Bevill State Community College | link | link
Louisiana State U. in Shreveport | link | link
Texas Tech University | link | link
Coker College | link | link

I did not try to make a complete list of all available reports. If you find a good one, send me the URL and I'll add it. Here's my Google search criterion.

Disclosure: I was the liaison, IE chair, webmaster, and editor for the Coker College process (as well as doing the IR and a bunch of other stuff--no wonder I have white hair). The document links in that one are turned off for confidentiality, but you can find the complete list of program learning outcomes plans and results here.

Observations:
First, hats off to all the institutions who make these reports public. This is a great resource to anyone else going through the process.

I only scanned through the reports, looking for evidence of learning outcomes. I probably missed a lot, so take my remarks with a grain of salt--go look for yourself and leave a comment if you find something interesting. It should go without saying that in order to be helpful, this has to be a constructive dialogue.

For learning outcomes I didn't find as much evidence-based action as I would have expected, given all the emphasis SACS puts on it. My own experience was that programs were uneven in their application of what is now 3.3.1 (at the time, SACS didn't even have section numbers for the requirements--how crazy is that? I invented my own, and then they published official ones just before we had to turn the thing in). So there was a lot of taking the empty set and trying to build something out of it. That can take various forms, which one notices in scanning certification reports:
  • Quick fixes: use a standardized instrument like MAAP, MFAT, CLA, NSSE. Of course, it's not really that quick since it would take at least a year to get results, analyze them, and use them. The conceptual problem is tying results to the curriculum (except for MFAT).

  • Use coursework: passing a certain course certifies students in X (e.g., use of technology), or passing a capstone course with grade X ensures broad learning outcomes. This is fairly convincing as gatekeeping, but hard to link to changes unless specific learning outcomes are assessed.

  • Rubric-like reporting: Okay, I'm not a big fan of rubrics when they're employed with religious zeal and utter faith in their validity. But I have to admit that the most convincing report summary I saw on learning outcomes was the one below, from Mitchell Community College. Not all the data points are there, but that's realistic. Take a look.
Of course, this still has to be tied to some analysis and action to get full points, but the presentation of the learning outcomes is clear and understandable. In general, that was something of a rarity in my cursory review. What there is a LOT of is plans, new plans, minutes describing the construction of new plans and goals, assessment forms, models and processes, and generally new ambitions and enthusiasms. There are standardized test reports like CLA summaries, which solve the data and reporting problem but don't touch the hard part: relating results to the practice of teaching in a discipline.

I believe that if our efforts as assessment leaders are to be maximally useful, we have to make the annual, messy, incomplete, inconsistent, but authentic program-level plans and results available to the public. This would encourage us to adopt some kind of uniformity in reporting and improve the quality of the presentation (maybe I'm a fool for saying that). The only downside is that if we're honest, there will be empty sets here and there--programs that have not been dragged into the 21st century yet. But transparency can help there too, perhaps by shaming some into compliance. Just imagine (really dreaming now) if the quality of the reports were good enough to use for recruiting and to paint across the program web page.

The Voluntary System of Accountability tries to do something like that. Unfortunately, that group seems to be enamored of standardized tests for learning outcomes. There's a validity study they just published here that you can consider. This post isn't the place to go into all the reasons I think standardized testing is the wrong approach, so let me just leave it at that.

Thinking more positively: is there any interest out there in forming a loose consortium of schools that report out annual learning outcomes for programs? The role of the consortium could be to settle on some standard ways of reporting and to define best practices.
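To make that concrete, here's a purely hypothetical sketch of what a minimal shared record for one program-level outcome might look like; the field names and example values are invented, not any existing standard:

    # Hypothetical minimal record for one program-level learning outcome.
    # Field names and example values are invented for illustration; no such
    # consortium standard currently exists.
    outcome_report = {
        "institution": "Example College",
        "program": "B.S. Biology",
        "year": "2008-2009",
        "outcome": "Students can design and interpret a controlled experiment",
        "measure": "capstone research project scored with a department rubric",
        "criterion": "80% of students score 3 or better on a 4-point scale",
        "result": "68% scored 3 or better (n = 22)",
        "action": "add an experimental-design module to the junior methods course",
        "public_url": None,   # link to the full report, if published
    }
    # A consortium page could then aggregate records like this across institutions.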