Monday, November 30, 2009

Research and Ed Blogs

I haven't been blogging here much because I've been spending time on a research project. It requires a lot of programming and machine-time to actually run the programs. I have it down to a science now, so at this point it's mostly tweaking parameters to see what happens. Basically, I created an artificial life workbench to test out some ideas about evolution and survival in a computational context. You can read background here if you're interested. The "artificial life" consists of little computer programs that have to solve certain tasks to survive. They reproduce and mutate if successful enough. The special programming language is very concise, and the critters look like >-]- and +#}O{]<>}. I chose the symbols for their (mostly) bilateral symmetry, so if you squint they actually do look like weird life forms. The first one, when run in the environment, does this:
  • > Look at the old environment (looks like feelers, no?)
  • - Subtract one from it
  • ] Skip to the end unless the result is zero
  • - Subtract one more if it was a zero
  • Output the result
The environment is only ever 0, 1, or 2, so the critter maps 0 or 1 to 9 (arithmetic wraps around, so zero minus one is nine) and 2 to 1. This is exactly a recipe for survival. It took 109 generations to evolve, and only happened once in 100 trials.
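For the curious, here's a minimal sketch of how a critter like >-]- could be interpreted. The opcode meanings follow the bullet list above, but the mod-ten wraparound and the "jump past the end" behavior are my assumptions; the real workbench language surely differs in detail.

```python
# Toy interpreter for the critter ">-]-" (not the real workbench language).
# Assumptions: arithmetic wraps mod 10, and "]" jumps past the end of the
# program unless the current value is zero; the final value is the output.

def run_critter(program: str, environment: int) -> int:
    value = 0
    i = 0
    while i < len(program):
        op = program[i]
        if op == '>':              # look at the old environment
            value = environment
        elif op == '-':            # subtract one (wrapping, so 0 - 1 = 9)
            value = (value - 1) % 10
        elif op == ']':            # skip to the end unless the result is zero
            if value != 0:
                break
        i += 1
    return value                   # output the result

if __name__ == "__main__":
    for env in (0, 1, 2):
        print(env, "->", run_critter(">-]-", env))
    # prints 0 -> 9, 1 -> 9, 2 -> 1, the mapping described above
```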

In other news, I came across a nice list of education blogs and news sites. I haven't gone through them all yet, but many are ones I hadn't seen before. Not all of them are about higher education. The list is here.

I also discovered Crossing the Finish Line, described in one review on the Amazon page as "The most comprehensive look yet possible at the determinants of graduation rates--and what might be done to improve them." It looks fascinating, but I've only been able to take a peek so far.

Also of note: NPR's piece "Who Needs College, and Who Shouldn't Go?"

Friday, November 20, 2009

Increasing Transfers

A Chronicle article "Report Highlights Characteristics of Colleges With High Transfer-Success Rates" cites a new publication from The Pell Institute about successful transfer programs. The new report hasn't shown up on that organization's publications page yet, but there are some interesting tidbits in the review:
The study found that early exposure is critical to ensuring a successful transition to college, especially for students who are from low-income families or are the first in their families to go to college. Such students are likely to be unfamiliar with higher education and what it will take to earn a bachelor's degree.
This echoes what I found in a retention study--first-generation students didn't really understand the product they were buying, and quickly became disenchanted once they arrived (here).

The article lists some elements of successful two-year programs that built bridges to four-year programs:
  • specialized advising
  • flexible scheduling of academic and support services
  • first-year seminars that include strategies on note taking, test taking, and navigating campus services
  • one-stop shops, where services such as registration and financial aid are placed together in one central location
  • replacing a tall customer-service counter with desks to make the interaction between students and staff members more accessible and personal
  • offering clubs and organizations
  • setting aside an hour each day when no classes were scheduled to further encourage participation
  • employing faculty and staff members whose backgrounds are similar to their students'
I imagine that many of these elements would work from the other direction too, implemented at four-year institutions that seek to increase transfers in from two-year institutions.

Thursday, November 19, 2009

Communication Good, Communication Bad

Two articles in the Chronicle at the top of the most-viewed list are interesting to contrast. First is "Turnaround President Makes the Most of His College’s Small Size" about G. T. Smith as President of Davis & Elkins College:
[I]n higher education, Mr. Smith is known as a turnaround artist, a man with the talent and disposition to take a failing college and transform it into a winner.
Quoting Smith, they give the reason for his success as "The underlying thing for me is relationships—hardly anything important happens that doesn't have to do with relationships." For example:
After years of stagnant enrollment at Davis & Elkins—which had developed a dismal local reputation, according to some local high-school counselors—the freshman class was up 50 percent this fall. As of November, the number of applications was more than seven times higher than at this time in 2007, and eight students had already put down deposits. Consider that those numbers came after the college had canceled its advertising campaign and done away with mass mailings in favor of a highly personal approach to recruiting students: getting to know their names, their parents' names, their dogs' names, and conveying the message that at this college of 700 students, you're part of a family.
A bit off-topic, but equally fascinating is his take on the changing landscape of higher education, which I've speculated wildly about in these virtual pages. Pres. Smith's version:
"We can't in a Pollyannaish way say, 'The liberal-arts college will always survive.' We are all under threat or under siege," he says. "It comes down to whether you are going to look at your future based entirely upon your past or what others are doing, or whether you are going to look at the fundamentals, the principles, the basics, and have the discipline to stay with those."
That was "Communication Good." Now the other one.

A while back I blogged happily about the backchannel--the emergence of live chattering by text, Twitter, blogging, etc. during presentations, class meetings, or any occasion where people have the means and time. It's the 21st century version of passing notes and whispering.

Today's references are:
You can gather from the titles what happened. In the case of the last article on the list, a presenter at Web 2.0 Expo was giving her talk with a huge screen in the background showing the conference Twitter channel feed. When some tweeters started making disparaging comments on the backchannel, she didn't understand why the audience was reacting oddly to her talk, and got understandably flustered. The other articles talk about other instances of this "tweckling," a neologism I find kind of cute, even if the behavior is reprehensible.

Interestingly, browsing Technorati for "backchannel" only produces positive-sounding hits, with advice for speakers on how to build one, for example.

The moral of the story? It would be ridiculous to try to generalize anything based on these two completely different articles about communication, so I can't resist. The difference is between high-bandwidth, high-stakes (personal responsibility for words) communication and low-bandwidth, low-stakes (essentially anonymous) communication. It's not that the first is good and the second bad, just that they're very different. And the no-man's land in between--email--is the worst of both worlds: high stakes and low bandwidth. Ever had a joke or off-hand comment interpreted differently than you intended it? Add to that the ease of someone sending your email to a third party, and the potential for mischief multiplies.

One of the articles notes that some conferences post rules of etiquette for the backchannel. Of course, if the tweeters are anonymous, this isn't going to have much effect. In the end, though, there isn't true anonymity on the Internet. Pretty much anything can be traced if there is enough interest in doing so. Best policy, in my opinion, is: if you think it's important to say it, put your name on it, be clear, and be polite. I probably fail on the second point, but it's a work in progress. The best payoff would be to get almost as much mileage out of low-bandwidth communication as Pres. Smith gets out of high-bandwidth communication. That's a powerful idea that could spawn a sea of consultants: maximize the effect of short, text-based, remote communication.

Of course, this is the Internet; someone has probably written a book on it already.

Wednesday, November 18, 2009

Grading and Assessment

The topic of how grades and assessments could be aligned has been mentioned here before (see "Assessing away Grades"). Nils Peterson pointed me to an active discussion on the topic on HASTAC called "Grading 2.0: Evaluation in the Digital Age." There are some good links there, topic questions, and several comments. One of the discussion points asks:
3. Can everything be graded?
- How important is creativity, and how do we deal with subjective concepts in an objective way, in evaluation?
Here, I think we run smack into the problem. It goes like this:
  1. Grades have economic consequences for both students and teachers.
  2. Because of this, grades have to be defensible during a challenge and review process.
  3. Because they have to be defensible, grades have to have at least the appearance of objectivity.
  4. However: the best assessments should be free from economic influence, and may be subjective (see the whole Assessing the Elephant thing).
This problem only rears its head for complex learning outcomes. If you're teaching multiplication tables, it's not a problem to create objective (maybe even valid and reliable) instruments. What about creativity, however, as posed in the question above? Can we really slice up that concept into "dimensions" and rubrics that capture the essence of what creative genius produces?

It's instructive to see how grades are assigned in the fine arts and performance arts. There simply is a lot of subjectivity. I'm generalizing from limited experience, but I think that the key is the attitudes and methods the assessor uses more than the actual assessment. For example, if I say "your work is all derivative and boring," it's very different from saying "to my taste, this doesn't excite me." The former sounds like an objective statement, and the latter is clearly subjective. Students aren't stupid, and they know that there's a difference: dressing up subjectivity as objectivity only irritates them. What I've seen from successful art-type assessments is that the effort put into the work counts for a great deal. Creativity is a kind of exploration, perhaps, requiring trial and error, and therefore time invested. Art profs want to see portfolios, sketchbooks, incomplete works, anything that shows that the student is engaged. And the judgment of how much work one has done can be fairly objective; it's something you can talk to the student about and reach agreement on. Of course, the quality of the engagement counts as well, but to some extent I think that comes out also, if one can review all the cars in the whole train of thought. This is certainly true of teaching math at the upper levels. It's a thrill when a student comes to you with "I thought of this problem and tried to solve it. Will you look at it?" Whereupon you're presented with scruffy bits of paper (mathematicians will write on anything) with formulas all over them. Most of them are wrong--false starts. It's like what one of my art colleagues said about looking at sketchbooks: the work is raw and unprocessed, and more powerful than a finished piece.

So it may be that there is a natural division between objective and subjective assessments and grades. The former are relatively easy. But maybe for more complex outcomes we need an approach more like that of art: look not just at a finished product on a test or paper, but demand to see the corpus of work, mistakes and all, that led to it. Technology can obviously help with this because information is cheap to store "forever." Portfolio systems as they are currently conceived are not really the right tool for this--what you'd want is a virtual space for storing documents and imposing a bit of structure on them. Perhaps a mind-map hyperlinked to documents, with meta-data tags on the whole thing so it can be sorted and presented by different facets. Add the ability for an instructor to freely annotate these nodes and artifacts with hidable notes, and it starts to sound attractive. At any rate, this is not the kind of problem that another scoopful of rubrics can solve.
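To make that a little more concrete, here is a rough sketch of the kind of structure I have in mind. Every name in it is hypothetical; a real system would need storage, permissions, and an interface on top.

```python
# A sketch of the "corpus of work" idea: mind-map nodes hyperlinked to
# artifacts (sketches, drafts, scraps), with tags for faceted sorting and
# hideable instructor annotations. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    author: str
    text: str
    hidden: bool = True            # instructor notes can be hidden from the student view

@dataclass
class Artifact:
    title: str
    uri: str                       # pointer to the stored document or image
    tags: List[str] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

@dataclass
class Node:
    label: str                     # a concept or stage in the train of thought
    artifacts: List[Artifact] = field(default_factory=list)
    children: List["Node"] = field(default_factory=list)

def artifacts_by_tag(node: Node, tag: str) -> List[Artifact]:
    """Collect every artifact under a node that carries a given tag (one facet)."""
    found = [a for a in node.artifacts if tag in a.tags]
    for child in node.children:
        found.extend(artifacts_by_tag(child, tag))
    return found
```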

Saturday, November 14, 2009

Planning Resources

As we get into the season of setting tuition, finalizing aid policies, tweaking strategic plans, and prognosticating about enrollment, it's handy to have a reference library.

Some of these links have been cited before, but I wanted to put them all in one place. I will likely add to them without bothering to add "update" to the post, to make this a reference page of sorts. If you have a good one, please forward it.

Strategic planning:
Retention/Success:
Recruitment:
Pricing:
Financial Aid:
Web Strategy:

Friday, November 13, 2009

Quote, Representation

Reviews of an institution's general education (or liberal arts requirements) might be characterized as nasty, brutish, and interminable, to paraphrase Hobbes. You can get a taste of it in "Progress, But No Votes, For Review" in the Harvard Crimson, about that institution's journey to re-enlightenment. And then there's a fascinating set of minutes from "WCU" here that documents the tortured trip:
The creation of a dynamic general education program that the faculty will "buy into" can perhaps change the University culture in a way that talking cannot. So, how are the faculty enticed to "buy into" a program? It is necessary to ignore the noisy 15-20% who fight any change in the status quo. [3/21/97 Minutes]
Also found in this rich set of minutes is a bibliography of recommended sources for those foolish enough to buy a ticket on this train:
The committee will look first at AAC's New Vitality in General Education, the first half of Weingartner's Undergraduate education: Goals and Means, AAC's Strong Foundations: Twelve Principles for Effective General Education Programs, and Gaff's New Life for the College Curriculum.
This was, of course, before the AAC&U released its LEAP initiative.

There is a virtual thicket of documentation to be found via google search, a salad of despair, Thomas Pynchon might say (see "Meeting Salad" for the quote).

My particular interest this morning was the composition of gen ed review committees, but it's easy to get lost surfing the epic struggles documented online--surely enough material for several posts. Heck, I should invite applications for a gen ed review "sub blog."

The practice I'm accustomed to is to select a representative from each large academic unit (a department or division, for example) that has content usually found in general education. This would include sciences, humanities, social sciences, and PE. Our accreditor, SACS, actually has a list of minimum requirements for a gen ed program.

Here's an example from "MU":
The General Education Review Committee is composed of two faculty representatives from each of the four academic units elected by Faculty Senate for overlapping terms of two years, one representative from non-school faculty elected by Senate for two years, and two student representatives elected by Student Senate for overlapping terms of two years. In addition, a chairperson is elected by Faculty Senate from the Faculty Senate membership to serve a three-year term.
There are also non-voting members with administrative titles.

Both the AAC&U and Academic Commons advertise an interest in the liberal arts. The former has a book on the subject of gen ed review, but you have to buy it to see it. The site whatwilltheylearn.com compares general education curricula to see if they get the basics, as defined by a conservative list of subjects.

The issue that I take up in the title of this post is the question of representation. If we assume that the practice of creating a review team in the way described above is common, then the result is a discriminatory process that almost guarantees political turf battles.

Think about the arrogance in the assumption "we'll pick a math guy to represent all you math people, and a history person to represent all of them, and so forth." This is roughly the same as picking a Cubs fan to represent all Cubs fans on how a stadium should be built, or picking someone who likes hot dogs to represent all hot dog eaters on how to choose a new menu.

There are many, many design questions for a general education review team to consider, and only a small part of it concerns the number and type of courses to be chosen. With the usual discriminatory practice, the only basis for "representation" is to lobby for one's own area: let's pack in as many math courses as we can, so we can get more faculty slots. Unless the committee can rise above that (and take the heat from their respective departments), the outcome is almost guaranteed to be bland, and the dynamics encourage unhealthy inter-departmental politics. Of course, there's no way to avoid politics. Why not make that a virtue?

In a representative democracy, we don't go out and pick one Toyota lover to go to Congress to represent all Toyota lovers. That kind of blatant discrimination would never be tolerated (okay, gerrymandering aside--but that's ugly too). What if we modeled the selection of a gen ed committee on an actual representative democracy? Nominees for the committee slots would be put up for a vote. They would be expected to set out their views on general education, probably in writing, but perhaps in a debate with other candidates too. It could be a vibrant, rich affair that properly celebrates and illuminates the process as it begins. It would also illustrate the use of liberal ideas that were hard-fought in the Enlightenment, instead of forcing an illogical form onto a titanic exercise in critical thinking. The irony of the usual process is eye-popping.

Just like in a representative democracy, the election of the representatives does not by itself create the design. The hard work of design and compromise, and yes, departmental politics, still has to be done.

Of course, this may be too Utopian. Perhaps what would happen is that the initial gamesmanship over the election of representatives would center solely on discipline loadings for the final product. Maybe the big departments would just vote themselves candidates that would beef up their areas. If so, that would be sad, but at least it would reveal the true motivations of the academy.

Update: I just noticed that InsideHigherEd has an article about the "jeopardy" liberal arts is in, citing a need for proof of relevance. They note that some institutions have presentations that resemble the kind of political debate I described in my Utopia:
The University of Alaska at Anchorage has started a lecture series where professors from different liberal arts disciplines give talks aimed at attracting a popular audience. Utah State University has invited successful alumni back to talk about how their liberal education shaped their careers.
It seems that with more open processes for constructing (via faculty procedures) and validating (with authentic assessment) general education means and results, potential students will have a better chance of seeing the point. In a retention study I did (see "The Value of Retrospection"), we ultimately discovered that many of our students simply didn't understand the product they were buying.

Thursday, November 12, 2009

The Memescape

In The Selfish Gene, Richard Dawkins popularized the notion of a 'meme,' which complements the biological 'gene.' When he took a question about the subject at a talk in Columbia, SC recently, he seemed almost rueful, saying something like "Oh, no. Not the meme question." But there's no turning back now, as the Google Trends chart shows:

A search on Amazon.com revealed about eighteen books about memes, and there are currently 49 million hits on google, including the wiki page, a Daily Meme site, and Know Your Meme that claims to document Internet phenomena (one might say ephemera). A whole "science of memes" called memetics has sprung up, claiming 487,000 web hits currently.

So what's a meme? The wikipedia definition is just fine:
A meme (pronounced /ˈmiːm/, rhyming with "cream"[1]) is a postulated unit of cultural ideas, symbols or practices, which can be transmitted from one mind to another through speech, gestures, rituals or other imitable phenomena
Remember pet rocks, anyone? Meme. Tickle-me Elmo. Another meme. Obviously, "meme" itself is a meme, and a very successful one. If we think of any medium that will permit complex organizations within it, it seems to find fixed points (self-replicators) on its own. We might call this the cosmic version of Murphy's Law: if a system has fixed-point expressions, they emerge naturally from noise. Feed some noise into a microphone and point it at the speaker. You don't get amplified noise, you get a particular resonant frequency that is tuned to the physical characteristics of the system. This is a powerful idea because you can work it backwards too: look at what emerges naturally from a medium and then figure out what the characteristics of the medium must be.

Ever get a chain letter? That's a particularly interesting one because it's easy to dissect. This idea is described by Dawkins, but I'll paraphrase. A good way to get an idea replicator going is to combine these elements:
  1. An imperative to replicate the idea. This is the basic "reproduction drive." E.g. Send this to ten of your friends.
  2. A carrot. If you perform #1, something good is likely to happen to you. E.g. Cite stories of people who had amazing fortune after forwarding the email to ten of their friends.
  3. A stick. If you fail to perform #1, something bad will happen to you. E.g. This one dude forgot to send the email and a brick fell on him.
Of course, the more fake documentation (a REAL LAWYER friend said "blah blah blah"), and other emotional mumbo-jumbo you can ladle onto the thing, the better chance it has. In order to survive, it needs to multiply its chances geometrically (at least, I think I can prove that if you give me an hour), and hence the replication imperative is structured to multiply senders. Dawkins notes that the elements above are found in most religions.
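For the geometrically minded, here is the back-of-the-envelope version as a toy simulation (my framing, not Dawkins'): if each recipient forwards the letter to n friends with probability p, the expected number of copies in generation t is (np)^t, so the letter fizzles out unless np exceeds one.

```python
# Toy branching-process simulation of a chain letter (an illustration, not a proof).
# Each copy is forwarded to `fanout` friends with probability `p_forward`;
# the expected number of copies per generation grows (or shrinks) by fanout * p_forward.
import random

def simulate(fanout: int, p_forward: float, generations: int, seed: int = 1) -> list:
    random.seed(seed)
    copies, history = 1, [1]
    for _ in range(generations):
        copies = sum(fanout for _ in range(copies) if random.random() < p_forward)
        history.append(copies)
        if copies == 0:                      # the letter has died out
            break
    return history

if __name__ == "__main__":
    print("fanout*p = 0.9:", simulate(fanout=3, p_forward=0.3, generations=15))
    print("fanout*p = 3.0:", simulate(fanout=10, p_forward=0.3, generations=6))
```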

This is a mind-broadening idea, because we all carry around the burden of memes that are so deeply buried in our nous that we don't realize they are there. Some are undoubtedly hardwired by evolution so we avoid starvation, for example. Not everything you believe can be logical. This is because even logic is based on assumptions (called axioms). Karl Popper ran into this problem when he argued that science never proves a theory, but only disproves theories that don't work. By his own logic, he couldn't prove that. You have to start somewhere. It's a nice exercise to dig around in your beliefs and see what's there. Could be a great self-assessment assignment for a course that contains noncognitives too. Which leads to the next topic.

Memes and the Academy. What does this have to do with teaching, learning, and assessing? Obviously teaching has to do with the transmission of ideas. I think we miss a trick by not trying more actively to transmit habits of mind as well. After all, part of being open-minded is the meta-cognitive ability to examine one's own beliefs. We could do that more deliberately in our curricula.

There is apparently some scholarly activity on this topic, for example "Memes and the Teaching of English," which "Examines why some sayings and catchphrases stick in people's minds, while others are unrecognized and unused. Offers an answer to this question from an evolutionary standpoint."

It's much more natural to me to think of assessment on the meme level, rather than on the cognitive ability level. Individual concepts and skills can be taught. I'm not sure how to teach raw cognitive ability. Assessments of the former are more or less straightforward; assessments of the latter are steeped in statistical voodoo and ed-psych theory that may or may not have a basis in physical reality.

In practical terms, you can use memetics every day. Here's an example. You're sitting in a committee meeting, where the members are problem-solving. You have some ideas of your own about how the issue at hand might be resolved. What do you do?

First, there are always ideas. Lots of ideas. I'd recommend sorting through your own in your head before advancing one. Make sure it has a chance of survival. Wouldn't want to get the poor meme's hopes up only to be dashed immediately. There is value to producing good ideas--it makes you seem smart--and so your standing in the community is enhanced when you display brilliance. So you may choose to pick your best idea and propose it. This is somewhat Machiavellian because the loading is "Professor Zza has good ideas" rather than the actual idea itself. We all have to play that game early in our careers. Of course, it extends to publishing research, which is more substantive than what happens in committees, but the idea is the same. This is the reason there are lots of meaningless papers produced--the real message is: Zza is really productive.

Once past the need to establish oneself, there is (sometimes) the altruistic urge to actually reach good decisions in committees, simply for the good of the academy. This is an entirely different approach. It's like fishing. The problem is that everyone has only so much political capital to spend. Even if you have wonderful ideas, no one wants to admit that their own ideas are ugly misshapen things, so there is a kind of baby worship that happens when someone produces a newborn meme, no matter how defective and ill-begotten. At least, this can happen. If you're lucky there will be a ruthless meme-killer in the group who wields an idea-axe to put these unfortunate ideas out of their misery. That's a political minefield, so better have tenure first, and plan to be labeled a "character" for the rest of your career.

The fishing idea works like this: first, admit that other people have ideas as good as yours, and dedicate yourself to finding the best ones overall--not by wielding the axe, but by subtle twitches of the fishing lure. Help turn the discussion toward the ideas you see as best, and toss out a distraction if a real clunker comes along. A bright shiny object is as effective as an axe.
"I think we should eliminate faculty parking altogether."
"Hey--did you hear the merit bonuses are out?"
Check your ego. If you have a really great idea, wait to see if someone else comes up with it too. You can nudge your lure a bit toward the idea and hope someone grabs it. Then speak out in support of it strongly. This gives the little meme the best chance at life. For example, imagine that you like the idea of instituting an Assessment Day in the academic calendar to allow program assessments and surveys to be administered all at once. You might dangle your lure thus:
You: One of the problems is finding the time to get all these assessments done in an organized way. We don't want to duplicate students--ask them to take the same survey twice. They hate that.
Prof Eks: Maybe an assembly of students on Saturday?

You: They'd never show up. It would cost us a fortune in iPods to motivate them. I wish we could take class time to do it--you know, all at the same time period.

Prof Eks: That would never fly--we'd need to sacrifice a whole day from the calendar.

You: That's true. We'd need a whole day, probably. Hmmmm.

Prof Why: What if we added a day? Call it Assessment Day?

You: What a great idea! I've heard of this being done at Harvard and Yale. It would solve all our problems! Super!
You get the idea. You might want to hone your acting skills. A less honest way to engineer memes, which I don't recommend, is to attribute your own ideas to other people as a way of flattery. "You remember that idea you had about an Assessment Day? I really think that's a winner." If it really is their idea, that's fine, of course. But if you're inserting it into their memespace, tagged with their own ownership, you might get a funny look. It happens, though--now you're perhaps inoculated against such attacks. By means of this meme. Enjoy.

Update: Note that during the "fishing expedition" it often happens that someone advances an idea that's better than yours. At that point, it's obvious what to do: support the better idea.

Wednesday, November 11, 2009

Assessing Happiness

In The Chronicle of Higher Ed, an article caught my eye entitled "What's an M.B.A. Worth in Terms of Happiness?" (Subscription required: The Chronicle hasn't caught up to the times yet). The premise:
Any sensible person would rather be happy than rich, although many people often confuse the two—business students among them. Those who choose to attend business school on the assumption that an M.B.A. will help them change jobs, make more money, and therefore be happier are very likely misinformed.
The author claims that there is a game theory problem--a Zog's Lemma instance, if you will--in business schools. I'll let author Robert A. Prentice explain:

Unfortunately, M.B.A. programs are currently ranked—by U.S. News, BusinessWeek, and other unduly influential publications—using criteria that prominently include starting salaries for graduates and salary differentials pre- and post-business school. Rankings have such an important impact on M.B.A. programs in their intense competition for students, faculty members, and resources that it is unsurprising that the schools often try to game them—say, by admitting students not because they are the strongest applicants, but because they are interested in finance and consulting, which have historically been the highest-salaried jobs.

An implication made in the article is that this undue emphasis on money comes at the expense of ethical behavior, which is linked to happiness:
Other studies indicate that people who act ethically tend to be happier than those who do not, suggesting that we are evolutionarily designed to derive pleasure from receiving the approval of others and from doing the "right" thing. Brain scans indicate that when we act consistently with social norms, primary reward centers in the brain are again activated.
Notice the evolutionary psychology argument.

Can you realistically assess happiness? The only book I've read on the topic is Dan Gilbert's "Stumbling on Happiness," which I recommend. It's not a self-helpy book, but a description of the research that has been done in the field of happology. Dr. Gilbert's website has links to other projects he's engaged in, including a link to an application to track your own happiness on your iPhone, which is a brilliant idea. You could do the same thing on Twitter, methinks: one day a week send out tweets like #happymeter 5, for 5/10. Want to join me? Here's the link to results.

Maybe I think the idea is brilliant because I created a similar application a few years ago. Our student dropbox (a mini-portal) became wildly popular, and as a little research project I added a star rating system that looks like this:

There were no instructions, just the raters. On the login page, I then had the database calculate averages and display them. Here are the current ones.

From the beginning the trend has been school > life > world. There are about two years of data now, except that I did something stupid during the election and changed the rater to keep a running poll for a while. The graph shows a semester's worth of data, looking at change data week over week.
Self-reported happiness declined more in males than females as the semester wore on for that group of students who continually reported their ratings. This may be connected to higher attrition rates in males, which we also observed. I didn't have much luck in correlating the two, however. Other results showed that older students were significantly happier than younger students. The report is two years old now, but you can look at the whole thing here. After I read Dan Gilbert's book, I emailed him about this research. He replied with something like "Thanks for conditioning your students to take surveys!"
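If you want to roll your own version of that week-over-week summary, the analysis side is only a few lines once each rating is logged with a timestamp. This is a hypothetical sketch with invented file and column names, not the actual dropbox code.

```python
# Hypothetical sketch of the week-over-week summary; the file and column
# names are invented, not the actual dropbox schema.
import pandas as pd

# ratings.csv: one row per click, e.g. timestamp, user_id, scale, stars
ratings = pd.read_csv("ratings.csv", parse_dates=["timestamp"])

weekly = (ratings
          .assign(week=ratings["timestamp"].dt.to_period("W"))
          .groupby(["week", "scale"])["stars"]
          .mean()
          .unstack("scale"))      # one column per scale: school, life, world

print(weekly)                     # average rating per scale, per week
print(weekly.diff())              # week-over-week change, as in the graph
```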

So what professions are the happiest? I found this relatively current (2007) report from the University of Chicago: "Job Satisfaction in the United States."

It's nice to see that education administrators and teachers are both on there. Who'd have guessed? The unhappiest workers of all turned out to be roofers. You can see the whole list and other observations in the article itself. Job satisfaction isn't exactly the same thing as happiness, but perhaps it's close.

So does money buy you happiness? Aside from the inner glow of helping old ladies with their 401ks, isn't there a rewarding feeling that comes from shopping for a new yacht? The (2001) study "Does Money Buy Happiness? A Longitudinal Study Using Data on Windfalls" tries to answer that question by looking at people who won the lottery or received an inheritance. From the abstract:
A windfall of 50,000 pounds (approximately 75,000 US dollars) is associated with a rise in wellbeing of between 0.1 and 0.3 standard deviations. Approximately one million pounds (1.5 million dollars), therefore, would be needed to move someone from close to the bottom of a happiness frequency distribution to close to the top. Whether these happiness gains wear off over time remains an open question.
My guess is that getting the money "for free" isn't as rewarding as say, scraping it off the top of predatory loans, but judge for yourself. I'd love to participate in the next study, regardless. Where do I sign up for the winning lotto numbers?

Update: See Reuben Ternes' comment below about Gross National Happiness. I also fixed a typo.

Tuesday, November 10, 2009

Numbers and Names

Words have meaning. This is true even if the words are not formally defined; people could talk to each other before dictionaries came around. The facility to speak and understand is so fluid in fully-functioning humans that we underestimate how difficult it is (see Moravec's Paradox). Undoubtedly there was strong evolutionary bias toward creating this ease of communication, as opposed to making our brains facile with long division, for example.

Because words are powerful, they get hijacked. There is economic value attached to the effect of certain words, like "new and improved," and so they get put into use like blue-collar workers marching off to punch in. Sometimes this is manipulative or cynical, as brilliantly illustrated by Orwell in 1984. In the Russian revolution, Bolsheviks were pitted against Mensheviks, names stemming from a narrow vote. The former word means "majority" and the latter "minority." Imagine if your political faction is saddled with the second name...

In outcomes assessment, and in reporting psychometrics generally, common words get introduced sloppily. I've addressed the big one, "measurement," elsewhere. Another good source for this kind of error propagation is studies that use factor analysis. I came across a good example while reading "A Look across Four Years at the Disposition toward Critical Thinking Among Undergraduate Students" by Giancarlo and Facione while browsing Insight Assessment's research page. This company produces the Critical Thinking Dispositions survey I blogged about here. I don't mean to be critical of the authors, but rather to highlight a practice that seems to be endorsed by most who write about such things. The study itself is interesting, giving a before-and-after look at undergraduates as assessed by the survey. They introduce the topic of dispositions thus:
Any conceptualization of critical thinking that focuses exclusively on cognitive skills is incomplete. A more comprehensive view of CT must include the acknowledgement of a characterological component, often referred to as a disposition, to describe a person’s inclination to use critical thinking when faced with problems to solve, ideas to evaluate, or decisions to make. Attitudes, values, and inclinations are dimensions of personality that influence human behavior.
Notice the implication that personality comes in dimensions. Dimensions are by definition independent of one another, and as we shall see, the idea is that we can take a linear combination of these pieces to assemble a whole disposition. This is an enthymeme without which the rest of the analysis cannot proceed, but it's a big leap of faith. As such, it ought (in the research community) to be spelled out explicitly. The mindset that attitudes and values and inclinations together create some kind of vector space is so wild that you'd think caution would be advised. If the implication is that these dimensions really are orthogonal (completely independent of one another), it's ridiculous on the face of it. What does it mean to have a very small amount of "attitude" but lots of "inclinations"?

Most things are not linear. If I'm talking softly, you may only hear bits and pieces of what I say. Increasing the volume will enable you to hear me clearly within a range, but we wouldn't be so bold as to say "talking twice as loud makes you understand me twice as well." We use linearity not because things are linear but because it makes it easy to do the analysis. In small ranges, it often makes sense to approximate non-linear phenomena with linear models, but one has to be careful about reaching conclusions.

In the article, the assumption is that the disposition to think critically is the linear combination of a few component dimensions. These are listed:
Factor analysis of the CCTDI reveals seven distinct elements. In their positive manifestation, these seven bipolar characterological attributes are named truthseeking, open-mindedness, analyticity, systematicity, critical thinking (CT) self-confidence, inquisitiveness, and maturity of judgment.
Notice the passive voice "are named." Are named by whom? Here's the process: The survey is administered and the results recorded in a matrix by student and item. A correlation matrix is computed to see what goes with what. Then a factor analysis (or singular value decomposition, in math terms) is performed, which factors the matrix into orthogonal dimensions. To understand this, it helps to look at an animation of a simple case. If the dimensions have different "sizes" (axes of the ellipse in the animation), then a more-or-less unique factorization results. If the dimensions are close to the same size, it's hard to make that case. Each dimension is defined by survey items and associated coefficients. It supposedly tells us something about the structure of the results. Note that orthogonal means the same thing it did earlier: completely independent. You can have zero of one factor and lots of another, and this needs to make sense in your interpretation.
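To make the mechanics concrete, here is a bare-bones sketch of that recipe on made-up responses. It is not the CCTDI analysis itself, just the generic pipeline: correlate, factor, and then a human supplies the names.

```python
# Generic factor-extraction recipe on fabricated survey data (not the CCTDI).
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(200, 10)).astype(float)  # 200 students x 10 items

# "See what goes with what": the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)

# Factor the matrix into orthogonal dimensions (singular value decomposition);
# the singular values are the "sizes" of the dimensions.
U, s, _ = np.linalg.svd(corr)
print("dimension sizes:", np.round(s, 2))

# Each column of U is a candidate dimension, defined by items and coefficients.
# Someone then eyeballs the high-loading items and names the dimension
# "truthseeking" or the like -- the math never supplies the word.
for k in range(2):
    top_items = np.argsort(-np.abs(U[:, k]))[:3]
    print(f"factor {k}: loads on items {top_items}, coefficients {np.round(U[top_items, k], 2)}")
```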

So: we do some number crunching and find associations between items. These are collected together and named. We could call them vector1, vector2, and so on, but that wouldn't be very impressive. So we call them "openmindedness" or "attentiveness," using words that already have meanings.

It's not even clear what the claim actually is. Is it that we humans perceive critical thinking dispositions as a linear combination of some fundamental types of observation, presumably presented to us in whole form by our perceptive apparatus? Or is it that, in reality, our brains are wired in such a way that dispositions are generated as linear combinations?

It would be relatively easy to test the first case using analysis of language, like the brilliant techniques I wrote about in "High Five." I don't see any evidence that this sort of thing is done routinely. Instead, researchers eyeball the items that are associated with the dimensions that pop out and give them imaginative names. They may or may not be the same names that you and I would give them, and may or may not correspond to actual descriptions that someone on the street would use to describe the test subject.

I hope you can see the sleight of hand by now. In the case of this particular article, the authors go one step further, describing in detail--in plain English--what the dimensions are (I have bolded what was underlined in the original):
The Truthseeking scale on the CCTDI measures intellectual honesty, the courageous desire for best knowledge in any situation, the inclination to ask challenging questions and to follow the reasons and evidence wherever they lead. Openmindedness measures tolerance for new ideas and divergent views. Analyticity measures alertness to potential difficulties and being alert to the need to intervene by the use of reason and evidence to solve problems. Systematicity measures the inclination to be organized, focused, diligent, and persevering in inquiry. Critical Thinking Self-Confidence measures trust in one’s own reasoning and in one’s ability to guide others to make reasoned decisions. Inquisitiveness measures intellectual curiosity and the intention to learn things even if their immediate application is not apparent. Maturity of Judgment measures judiciousness, which inclines one to see the complexity in problems and to desire prudent and timely decision making, even in uncertain conditions (Facione, et al., 1995).
These descriptions would serve suitably for ordinary definitions of ordinary terms (without the use of "measurement"), but no evidence is presented that the ordinary meanings of all these words corresponds in any way to the factor analysis results, other than that someone decided to give the dimensions these names. The final touch is claiming that we "measure" these elements of personality with precision:
For each of the seven scales a person’s score on the CCTDI may range from a minimum of 10 points to a maximum of 60 points. Scores are interpreted utilizing the following guidelines. A score of 40 points or higher indicates a positive inclination or affirmation of the characteristic; a score of 30 or less indicates opposition, disinclination or hostility toward that same characteristic. A score in the range of 31-39 points indicates ambiguity or ambivalence toward the characteristic.
All of this strikes me as absurd. It's not that surveys can't be useful. To the contrary, they undoubtedly can give us some insights about student habits of mind. But to suppose that we can slice and dice said behaviors with this precision is far over-reaching, particularly in the use of ordinary language to create credibility without proof that these associations are strong enough to withstand challenge.

This practice is unfortunately common. The NSSE reports include dimensions like this, for example.

Sunday, November 08, 2009

Other Reading

Here are some noteworthy articles that speak for themselves:

Strategic enrollment planning:

From Noel-Levitz (papers and reports):
On assessment:
Educational Technology

Pricing Higher Ed

My last post included a link to "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education" by Epple, Romano, and Sieg from 2003. In the paper, they test economic models against actual data and reach some very interesting conclusions about how pricing works. One of the assumptions is "In our model, colleges seek to maximize the quality of the educational experience provided to their students."

I thought about this for a while. It's not obviously true, is it? I'm trying to remember how many meetings I've sat in where someone talked about the quality of educational experience. Of course, in many small ways programs, individual instructors, chairs, and so on do bits and pieces that impact this quality. And the SACS Quality Enhancement Plan is supposed to turn this into a visible project.

But by and large, I think most of my meeting time has been spent on solving problems, grinding away at the routine bureaucracy, or (once in a while) trying to make the bureaucracy work better. Of course, outcomes assessment is supposed to lead to continual improvements in the quality of education, but it would be a wonderful thing if board meetings were opened with the sentiment: we're here to improve the quality of educational experience.

As it turns out, I'm in the middle of a project to improve the "experience" part of that by helping organize strategic planning action items along those lines, and I'm going to start using that language.

In the article, the authors give some dependencies for quality:
  1. peer ability of the student body
  2. a measure of peer-student income diversity
  3. instructional expenditures per student
Quality is relative, and two of the dependencies listed above are intuitive: students don't want to attend classes populated with students who are all less able than themselves. They also perceive the institution's ability to spend money in the classroom. This one is reflected in college rankings too (see "Zza's Best Liberal Arts Schools"), which probably has some effect on decisions. The second dependency, however, is surprising to me.

They see a distinct stratification that bestows economic benefits to the top schools:
Colleges at low and medium quality level have close substitutes in equilibrium and thus a limited amount of market power. Admission policies are largely driven by the “effective marginal costs” of educating students of differing abilities and incomes.

Colleges with high quality have more market power. These colleges do not face competition from higher-quality colleges. Hence, they can set tuitions above effective marginal costs and generate additional revenues that are used to enhance quality.
This suggests a Darwinian struggle for schools at the low and mid-levels of means and quality. In a catch-22, they lack the pricing power to enhance their position much. But once a school breaks through that ceiling, things get easier. At least that's my interpretation.

On the subject of price, the authors illuminate the second dependency (financial diversity):
We also find that colleges at all levels link tuition to student (household) income. Some of this pricing derives from the market power of each college. This allows colleges to extract additional revenues from students that are inframarginal consumers of a college. However, as noted above, our empirical findings suggest that market power of lower and middle ranked colleges is limited. This suggests that pricing by income may be driven by other causes.
I found an explanation of what an "inframarginal consumer" is in another source: "The inframarginal consumer is willing to pay more for the good than is the marginal consumer." So, if your college has a good market position, you can charge a premium. But the authors argue that this isn't the whole story:
In this paper, we then also explore the role that income diversity measures play in determining college quality. Our findings here indicate that colleges and students believe that the quality of a student’s educational experience is enhanced by interacting with peers from diverse socioeconomic backgrounds.
Obviously there are many reasons for wanting a diverse student body, but the authors propose to actually use that as a factor that contributes to the price model. This begins to make more sense in Section 6 of the paper, where they verify empirically that college quality increases with income diversity, stating that "To attract students from lower-income backgrounds, colleges give financial aid that is inversely related to income as detailed below." While this is no doubt true for some institutions, others have a more directly self-interested reason for giving need-based aid: to increase enrollment among students who couldn't otherwise afford to attend. I talked about the revenue-generating effect of this "gap filling" in "The Power of Discriminant Pricing."

Also in Section 6, they make an observation about college size:
Absent scale economies, peer effects and endowments create a force for colleges to reduce size to increase student quality–in the limit maximizing quality by admitting a handful of brilliant students and lavishing the entire endowment on educating those students. The countervailing effect of scale economies is captured in our cost function primarily by the c3 term in the cost function.
This outlines a good strategy for an elite school: keep it small, because it's easier to maintain a high level of average student quality, but not so small that the loss of scale economies drives up costs unreasonably.

A hundred points of SAT is worth between $4,688 and $10,363 in merit aid (in 2003), according to the model output. The difference depends on what tier of college the applicant applies to.

Conclusions: First, remember I'm not an economist. But the paper is clearly written, and you can skip the mathy bits easily enough. The model presented has errors, as the authors describe, but the approach seems to lead to some insights, like the relationship between size and quality, the effect of financial diversity on institutional quality, and price sensitivity by student ability and income. I have not delved into all of these in my notes above. I don't know how hard it would be to simulate their model numerically to actually use it to build your policies (e.g. by running scenarios), but it's probably worth showing it to your IR office. And if you have an economics department handy, maybe they can shed some light as well.

Saturday, November 07, 2009

Price Elasticity

Soon enough, boards and presidents, committees and task forces, will take up the question of setting tuition for next year. The discussion must vary considerably from institution to institution, but for tuition-driven privates, it's a nail-biting exercise.

The unthinking version goes like this:
President: Well, we have all the budget requests now. How short are we?
Finance VP: We're a million short, after trimming.
President: How much do we have to raise tuition in order to close the gap?
Finance VP: (calculating) About seven percent.
President: Great. That settles that. Next on the agenda is the parking problem.
The list of reasons why this won't work is long. First, one must of course account for additional financial aid awarded when the tuition goes up: probably in the neighborhood of 40% of the gain. But it could be worse than that. It might happen that net revenue actually decreases when one raises tuition. This is related to price elasticity or price sensitivity: how does the demand for education at your fine institution vary with cost?
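Here is a toy illustration of that worry, with invented numbers and a constant-elasticity demand curve; it is not a model of any real institution. If demand is elastic enough, the seven percent increase from the dialogue above actually lowers net revenue.

```python
# Invented numbers, constant-elasticity demand: enrollment ~ price^(-elasticity).
# Net revenue = enrollment * price * (1 - discount rate). Illustration only.

def net_revenue(price, base_price=20000, base_enrollment=1000,
                elasticity=1.3, discount_rate=0.40):
    enrollment = base_enrollment * (base_price / price) ** elasticity
    return enrollment * price * (1 - discount_rate)

if __name__ == "__main__":
    for pct in (0.00, 0.07):
        price = 20000 * (1 + pct)
        print(f"tuition +{pct:.0%}: net revenue = ${net_revenue(price):,.0f}")
    # With elasticity above 1, the 7% increase actually lowers net revenue.
```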

I have started a survey of ideas out there for approaching this problem, and this post will provide links to some articles. Down the road, I'll try to give more analysis and detail. For several of the articles, you need access through your library to get them.

University Business takes the question head-on with "Research Tools to Guide Tuition and Financial Aid Decisions," from 2007 but still quite applicable. They describe a tuition pricing study:
A tuition pricing study involves a blind survey of prospective students and their parents. Since response rates are better, and the sample can be controlled more carefully, most studies are conducted via telephone.
Here's an old (1995) article called "Tuition Elasticity of the Demand for Higher Education among Current Students" in Journal of Higher Education. An even older (1987) article in the same journal is "Student Price Response in Higher Education."

A 1997 case study can be found in "Some new evidence of the character of competition among higher education institutions" in Economics of Education Review.

There are a lot of old papers you can find on google scholar, but not many recent ones. Here's a magazine article, again from University Business (2003) "Overcoming price sensitivity ... Means marketing affordability, and it's what every IHE needs to do.(On The Money)" that makes an interesting charge:
Unfortunately, the financial aid award letter itself, although a critical component of communicating affordability, comes too late in the process to influence anything but yield on admitted students--significant, certainly, but in many instances, not sufficient.
This is an interesting practical problem, and the article poses some great solutions. Forward this one to your FA director today. Seriously.

If you like economics, you may find this one palatable: "Admission, Tuition, and Financial Aid Policies in the Market for Higher Education," in Econometrica (2006). The authors show "that the model gives rise to a strict hierarchy of colleges that differ by the educational quality provided to the students." Also:
Our empirical findings suggest that our model explains observed admission and tuition policies reasonably well. The findings also suggest that the market for higher education is quite competitive.
It's very dense with formulas. I'll try to read the tea leaves when I have more time.

Thursday, November 05, 2009

Habits of Mind

On Tuesday I wrote in "Learning and Intelligence" about how having the ability to think rationally is different from having the habit of using that ability. The source was a fascinating New Scientist article.

Yesterday in an article in InsideHigherEd, Peter Facione wrote in a comment that:
The California Critical Thinking Skills Test (CCTST), used by hundreds of colleges and universities throughout the US and worldwide, has quietly become one of the leading measures of critical thinking skills. Its companion tool, the California Critical Thinking Disposition Inventory (CCTDI), assesses the habits of mind which incline one to use critical thinking in real world problem solving and decision making. Research shows that having the skills to think well and having the disposition to use those skills in key judgment situations is not highly correlated and yet few educational settings are taking this into account.
I do not remember reading about the CCTDI before. Their website describes the instrument as:
The California Critical Thinking Disposition Inventory is the premier tool for surveying the dispositional aspects of critical thinking. The CCTDI invites respondents to indicate the extent to which they agree or disagree with 75 statements expressing beliefs, values, attitudes and intentions that relate to the reflective formation of reasoned judgments. The CCTDI measures the "willing" dimension in the expression "willing and able" to think critically. The CCTDI can be administered in 20 minutes.
You can find a list of abstracts for research using the CCTDI here. There are some interesting bits to read there, like:
Significant differences were detected in critical thinking disposition (CCTDI) between the two groups of students, Hong Kong Chinese students failing to show a positive disposition toward critical thinking on the CCTDI total mean score, while the Australian students showed a positive disposition. The study raises questions about the effects of institutional, educational, professional and cultural factors on the disposition to think critically. [Tiwari A, Avery A, Lai P. (2003)]
You may recall that the CIRP is also trying to do this using item response theory to estimate a dimension called habits of mind. It's described on the HERI website as "Interactions with students that foster habits of mind for student learning."

It seems to me that this is an opportunity to change the discussion about general education, using these ideas. If disposition is indeed different from ability, then perhaps the marination of a student for two years in survey courses ought to be focused on developing habits of mind more than on trying to assemble a skills list (like critical thinking). Or in addition to it, if you're a glutton for punishment.

A student who completes a solid degree program is going to come out of it with real analytical and creative skills he or she didn't have before. But the way curricula generally work, and the way our learning outcomes are sketched, I don't think it's common to address this other dimension: intentionally developing open-mindedness, truth-seeking, systematicity, and maturity, as the CCTDI gets factored.

Wednesday, November 04, 2009

404: Learning Outcomes

tl;dr Searched SACS reports for learning outcomes. Table of links, general observations, proposal to create a consortium to make public these reports.

In grad school there was a horror story that circulated about a friend of a friend of a cousin, who was a math grad student in algebra. He had created a beautiful theory with wonderful results, and was ready to submit when it was pointed out to him that his axioms were inconsistent--they contradicted one another. The punchline is that you can prove anything of the empty set. This sometimes also happens to degree programs that suddenly have to prove that they've been doing assessment loops, except in reverse: building grand theories from the empty set.

I complained the other day that there weren't many completed learning outcomes reports from universities to be found on the web. So when I noticed Mary Bold's post at Higher Ed Assessment "Reporting Assessment Results (Well): Pairing UNLV and OERL" I thought I'd hit paydirt. The hyperlink took me to a page at the University of Nevada, Las Vegas with a link advertising "Student Learning Outcomes by College." Without further ado, here's the page:

That's just too funny. There are, however, excerpts from actual results listed in the teacher education site, which you can find here. That site is the OERL that Mary refers to in her post.

It did make me think, however. There must be a bunch of SACS compliance certifications out there on the web now, and section 3.3.1 (used to be 3.4.1) covers learning outcomes. Want to see how your peers have handled it? The name of the school in the table below links to the compliance certification home page for that institution. For good measure I'll throw in 3.5.1, general education assessment, too. You're welcome.

Institution | 3.3.1 | 3.5.1
Southeastern Louisiana University | link | link
Western Kentucky University | link | link
Berea College | link | link
The College of William and Mary | link | link
The University of Alabama in Huntsville | link | link
Nashville State Community College | link | link
Mitchell Community College | link | link
The University of Texas Arlington | link | link
University of New Orleans | link | link
Albany State University | link | link
Bevill State Community College | link | link
Louisiana State U. in Shreveport | link | link
Texas Tech University | link | link
Coker College | link | link

I did not try to make a complete list of all available reports. If you find a good one, send me the URL and I'll add it. Here's my Google search criterion.

Disclosure: I was the liaison, IE chair, webmaster, and editor for the Coker College process (as well as doing the IR and a bunch of other stuff--no wonder I have white hair). The document links for that one are turned off for confidentiality, but you can find the complete list of program learning outcomes plans and results here.

Observations:
First, hats off to all the institutions that make these reports public. This is a great resource for anyone else going through the process.

I only scanned through the reports, looking for evidence of learning outcomes. I probably missed a lot, so take my remarks with a grain of salt--go look for yourself and leave a comment if you find something interesting. It should go without saying that in order to be helpful, this has to be a constructive dialogue.

For learning outcomes I didn't find as much evidence-based action as I would have expected from all the emphasis that SACS puts on it. My own experience was that programs were uneven in their application of what is now 3.3.1 (at the time SACS didn't even have section numbers for the requirements--how crazy is that? I invented my own, and then they published official ones just before we had to turn the thing in.). So there was a lot of taking the empty set and trying to build something out of it. That can take various forms, which one notices in scanning certification reports:
  • Quick fixes: use a standardized instrument like MAAP, MFAT, CLA, NSSE. Of course, it's not really that quick since it would take at least a year to get results, analyze them, and use them. The conceptual problem is tying results to the curriculum (except for MFAT).

  • Use coursework: passing a certain course certifies students in X (e.g. use of technology), passing a capstone course with grade X ensures broad learning outcomes. This is fairly convincing as gate keeping, but hard to link to changes unless specific learning outcomes are assessed.

  • Rubric-like reporting. Okay, I'm not a big fan of rubrics when employed with religious zeal and utter faith in their validity. But I have to admit that the most convincing report summary I saw on learning outcomes was the one below from Mitchell Community College. Not all the data points are there, but that's realistic. Take a look.
Of course, this still has to be tied to some analysis and action to get full points, but the presentation of the learning outcomes is clear and understandable. In general, that was somewhat of a rarity in my cursory review. What there is a LOT of is plans, new plans, minutes describing the construction of new plans and goals, assessment forms, models and processes, and generally new ambitions and enthusiasms. There are standardized test reports like CLA summaries, which solve the data and reporting problem but don't touch the hard part: relating results to the practice of teaching in a discipline.

I believe that if our efforts as assessment leaders are to be maximally useful, we have to make the annual, messy, incomplete, inconsistent, but authentic program-level plans and results available to the public. This would encourage us to adopt some kind of uniformity in reporting, and improve the quality of the presentation (maybe I'm a fool for saying that). The only downside is that if we're honest, there will be empty sets here and there--programs that have not been dragged into the 21st century yet. But transparency can help there too, perhaps by shaming some into compliance. Just imagine (really dreaming now) if the quality of the reports were good enough to use for recruiting and to paint across the program web page.

The Voluntary System of Accountability tries to do something like that. Unfortunately, that group seems to be enamored of standardized tests for learning outcomes. There's a validity study they just published here that you can consider. This post isn't the place to go into all the reasons I think standardized testing is the wrong approach, so let me just leave it at that.

Thinking more positively: is there any interest out there in forming a loose consortium of schools that report out annual learning outcomes for programs? The role of the consortium could be to settle on some standard ways of reporting and to define best practices.

Tuesday, November 03, 2009

Net Cost of College Drops

tl;dr Although sticker prices have risen dramatically at non-profit privates, actual average cost has dropped due to institutional discounting.

The College Board's "2009 Trends in College Pricing" (pdf) is a fact-packed publication worth perusing. The narrative in the popular press is by now well-established: tuition keeps rising faster than the consumer price index. Examples:
  • "The Skyrocketing Costs of Attending College" (October 2009) In this article, one's eye jumps to the dramatic graph, reproduced belowThere is a disclaimer that these prices are not what students actually pay, but that topic isn't mentioned again.
Although it turns out to be true that tuition increases have outpaced inflation, one should pay attention to the fine print (quotes are from "2009 Trends in College Pricing"). First the bad news:
Published tuition and fees at public four-year colleges and universities rose at an average annual rate of 4.9% per year beyond general inflation from 1999-2000 to 2009-10, more rapidly than in either of the previous two decades.
However,
The rate of growth of published prices at both private not-for-profit four-year and public two-year institutions was lower from 1999-2000 to 2009-10 than in either of the previous two decades.
Once one goes beyond sticker prices and looks at discounted prices, the price increase (on average, at least) vanishes:
Although average published tuition and fees increased by about 15% in inflation-adjusted dollars at private not-for-profit four-year colleges and universities from 2004-05 to 2009-10, and by about 20% at public four-year institutions, the estimated average 2009-10 net price for full-time students, after considering grant aid and federal tax benefits, is about $1,100 lower (in 2009 dollars) in the private sector and about $400 lower in the public sector than it was five years ago.
The excerpted graph shows that the dramatic change in sticker price did not affect net price at privates:
(grey is room and board, light blue is advertised tuition, dark blue is tuition after aid)
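To make the sticker-versus-net distinction concrete, here is a tiny back-of-envelope sketch. The dollar figures are made up for illustration (they are not taken from the College Board tables); the point is only that when grant aid and tax benefits grow faster than the published price, net price can fall even while the sticker climbs.

```python
# Hypothetical numbers, chosen only to mirror the shape of the College Board claim:
# a ~15% sticker increase can coexist with a net price that is ~$1,100 lower.
sticker_2004, aid_2004 = 27_000, 12_500   # published price and average grant/tax aid (made up)
sticker_2009, aid_2009 = 31_000, 17_600   # sticker up ~15%, aid up faster (made up)

net_2004 = sticker_2004 - aid_2004        # 14,500
net_2009 = sticker_2009 - aid_2009        # 13,400
print(f"net price change: {net_2009 - net_2004:+,}")   # about -1,100
```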

Where does the aid come from, that makes the difference between gross tuition and net tuition? In the College Board companion report 2009 Trends in Student Aid (pdf), we learn that private not-for-profits are discounting more heavily:
Institutional grant dollars per FTE student increased by 7%, from $1,718 to $1,840 (in 2008 dollars) from 1998-99 to 2003-04, and by 19% to $2,190 over the next five years.
That 19% figure is pretty dramatic. Note that it doesn't mean the average discount rate increased by 19%: gross tuition revenue per student rose over the same period, so the rate itself would have gone up by something more like 4-6% (a back-of-envelope sketch appears after the NACUBO figures below). The report doesn't directly track that statistic, unfortunately. There is a chart of all aid sources for undergraduates for perspective:


The institutional grants portion lumps together publics and privates, and so doesn't give a good idea of what the discount rate is for privates. For more on that we can turn to a NACUBO publication "Tuition Discount Metrics," where we learn:
In the 1990s and early 2000s, discount rates jumped rapidly. For example, from fall 1990 to fall 2002, the average tuition discount rate (the share of tuition and fee revenue devoted to institutionally funded grant aid) at four-year independent institutions increased from 26.7 percent to 39.4 percent, and the share of first-time, full-time freshmen who received an institutional grant award grew from about 62 percent to 81 percent.
Discounts have stabilized at that level since, the article continues, pegging the 2007 figure at 39.1%. No data for 2008 or 2009 are given in the article.
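Since the discount rate is institutional grant aid divided by gross tuition and fee revenue, the 19% growth in grant dollars per FTE only moves the rate by however much grant growth outruns tuition growth. Here is the back-of-envelope sketch promised above. The starting rate comes from the NACUBO figure, but the tuition growth factor is an assumption I've supplied, and per-FTE grants are standing in as a rough proxy for aggregate grant aid.

```python
# Illustrative only: how a 19% rise in grants per FTE translates into roughly a
# 5% relative rise in the discount rate when tuition is also growing.
grants_before, grants_after = 1840, 2190   # College Board per-FTE grant aid, 2008 dollars
rate_before = 0.39                         # approximate NACUBO rate at four-year independents
tuition_growth = 1.13                      # ASSUMED growth in gross tuition revenue per student

tuition_before = grants_before / rate_before
tuition_after = tuition_before * tuition_growth
rate_after = grants_after / tuition_after

print(f"discount rate: {rate_before:.1%} -> {rate_after:.1%}")    # 39.0% -> ~41.1%
print(f"relative increase: {rate_after / rate_before - 1:.1%}")   # ~5.3%
```

With a different assumed tuition growth the relative increase shifts a bit, but the mechanics are the same: the rate moves by the ratio of grant growth to tuition growth, not by the grant growth alone.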

Note that average costs and individual costs are different things. So even though net tuition costs have dropped at privates (excluding for-profits), the way that happens affects different kinds of students differently. Discounts are unlikely to be applied evenly across the board, because that would defeat the purpose of the policy, which is to engineer the characteristics of an incoming class while maintaining the revenue stream. Often this can mean discounting prices for high-income families, because those students are the most likely to have high SAT scores (see "Money, Genes, and College"). More on that theme after I've had more time to dig through the data in the reports.

The story of dropping prices is apparently not the same at for-profits (quote from College Board cost report):
For students at all income levels, net tuition and fees at for-profit institutions increased 8% to 10% per year beyond inflation between 2003-04 and 2007-08, compared to 0% to 2% at private not-for-profit four-year colleges, 0% to 4% at public two-year colleges, and -6% to 3% per year at public four-year colleges.
Notice that is net tuition, not gross tuition. Aid patterns are different too:
In 2008-09, 88% of students enrolled in for-profit institutions used Stafford Loans, compared to 55% in private not-for-profit four-year institutions, 42% in public four-year institutions, and only 10% in public two-year colleges.
It's not surprising that the business model of for-profits shows up in this kind of statistic, and it underlines why so much political attention is being paid to federal aid and loan programs at the for-profits.

Learning and Intelligence

In "What is Learning?" I had fun with the idea of logic and learning, and definitions of learning. Today I'd like to first illustrate how simple the ingredients for learning can be and then turn to assessment of such things and a surprising twist on that.

First, the article yesterday hinted that there was a resolution to the philosophical problem of how a deductive process can learn. Clearly it's possible because the world around us seems to be ruled by deductive physical laws, and yet we can learn to play music and lots of other cool things. Learning is a prerequisite for survival, in fact, and living things are inductive machines. So how is this possible?

Mathematicians like simplicity, and the perfect example is one that illustrates all the complexities necessary to understand a problem without extraneous details. Donald Michie thought very hard about machines that could learn. Despite having worked with and befriended Alan Turing, and having worked at Bletchley Park to crack German cyphers using the first real electronic computing machines, he didn't have ready access to what we would call computers when he created a simple learning machine from matchboxes. You read that right.

On BoingBoing you can find "Mechanical computer uses matchboxes and beans to learn Tic-Tac-Toe." It uses 304 matchboxes, each labeled with a tic-tac-toe game position, and markers (beans or beads) inside the box that represent the next move to be made. Over many games, the winning strategies are identified by adding or removing beans according to a simple rule. It's brilliant.
The ingredients are:
  1. A deterministic process (rules indicating which matchbox to use next)
  2. Randomness (randomly choosing the next move based on what markers are inside the matchbox corresponding to the current state of play), and
  3. Memory (markers kept in each box)
Memory can be created with logic and time as we saw last time. Randomness popped up yesterday in the proposed solution to the induction problem with the introduction of Bayes' Rule: conditional probabilities. The markers inside each box represent conditional probabilities of the next move given the "priors" -- what has happened already.
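To make those ingredients concrete, here is a minimal sketch of a matchbox-style learner in the spirit of Michie's machine (which he called MENACE). This is not his exact design: I've assumed the machine plays X against a purely random opponent, seeded each box with three beads per legal move, and used a simple add-or-remove-one-bead reward at the end of each game.

```python
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

# The "matchboxes": one per board position the machine has seen (memory).
boxes = defaultdict(dict)   # state string -> {move: bead count}

def choose(board):
    state = ''.join(board)
    box = boxes[state]
    if not box:                                   # first visit: seed the box
        box.update({m: 3 for m in legal_moves(board)})
    beads = [m for m, n in box.items() for _ in range(n)]
    if not beads:                                 # box emptied by losses: reseed
        box.update({m: 1 for m in legal_moves(board)})
        beads = list(box)
    return state, random.choice(beads)            # draw a bead at random (randomness)

def reinforce(history, reward):
    # reward: +1 after a win (add a bead), -1 after a loss (remove one), 0 for a draw
    for state, move in history:
        boxes[state][move] = max(0, boxes[state][move] + reward)

def play_game():
    board, history, player = [' '] * 9, [], 'X'
    while True:
        if player == 'X':                         # the matchbox machine
            state, move = choose(board)
            history.append((state, move))
        else:                                     # a random opponent
            move = random.choice(legal_moves(board))
        board[move] = player
        w = winner(board)
        if w or not legal_moves(board):           # deterministic rules end the game
            return history, {'X': 1, 'O': -1}.get(w, 0)
        player = 'O' if player == 'X' else 'X'

wins = 0
for _ in range(5000):
    history, outcome = play_game()
    reinforce(history, outcome)
    wins += outcome == 1
print(f"wins against a random opponent: {wins} of 5000")
```

The three ingredients show up directly: the win-check and legal-move rules are the deterministic process, the bead draw is the randomness, and the bead counts in each box are the memory--in effect a crude table of conditional probabilities for the next move given the position.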

Given the role randomness plays here, it may not be a surprise to learn that there is a market for true randomness. See this service, for example, which advertises bits generated from quantum processes.

Now to assessment. We often concern ourselves with looking at "authentic learning outcomes" in student work, in order to judge how much they've learned. For the most part, we teach students to use the tools of rationalism: how to understand and ultimately produce knowledge that fits into the structure of an established discipline. I think we can loosely say that we try to make them more intelligent, or at least enable them to do more intelligent things (in case you think intelligence is fixed rather than malleable--though I'm not sure what the practical difference is).

The November 2 New Scientist article "Clever fools: Why a high IQ doesn't mean you're smart" may turn how you think about assessment sideways. The article says that IQ tests are good at assessing logic, abstract reasoning, learning ability, and memory. However:
But the tests fall down when it comes to measuring those abilities crucial to making good judgements in real-life situations. That's because they are unable to assess things such as a person's ability to critically weigh up information, or whether an individual can override the intuitive cognitive biases that can lead us astray.
Arguments that IQ is not all there is to intelligence have been around probably as long as the tests themselves, but what I found new in this article was the connection to mental processes that control the use of intelligence.
[U]nlike many critics of IQ testing, [professor of human development and applied psychology at the University of Toronto, Canada] Stanovich and other researchers into rational thinking are not trying to redefine intelligence, which they are happy to characterise as those mental abilities that can be measured by IQ tests. Rather, they are trying to focus attention on cognitive faculties that go beyond intelligence - what they describe as the essential tools of rational thinking.
Here's an example from the article. Consider the logic puzzler:
Jack is looking at Anne, and Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person?
Possible answers are "yes," "no," and "can't be determined." The answer is given at the bottom.

I encountered this line of thought in my survival research too, writing:
But there is a problem with rationality. A perfectly rational being has no particular reason for preferring existence to non-existence. In fact, a perfectly logical being has no reason to do anything. Consider what I call the Decider's Paradox. Our perfectly logical robot is presented with some environmental data. What is its first question? If it has one, it must be "What should my first question be?" Similarly, its second question must be "What should my second question be?" No other types of questions are possible without an illogical answer to the first one. Perfect logic alone is not enough to work with; some kind of emotional state is also needed to allow considered decisions to take place.
Emotional states control when intelligence is used. The article notes that 44 percent of Mensa members said they believed in astrology in one survey.

The implications for assessment are clear. It's not sufficient to know what students are capable of doing rationally; it's just as important to know if they will employ those tools when they need them. The solution to the puzzle above illustrates this very well (I got it wrong). Here's the problem and solution from the article:
Jack is looking at Anne, and Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person?

If asked to choose between yes, no, or cannot be determined, the vast majority of people go for the third option - incorrectly. If told to reason through all the options, though, those of high IQ are more likely to arrive at the right answer (which is "yes": we don't know Anne's marital status, but either way a married person would be looking at an unmarried one). What this means, says Stanovich, is that "intelligent people perform better only when you tell them what to do".
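Since Anne's marital status is the only unknown, the puzzle can be settled by checking both cases. A throwaway snippet confirming the article's answer:

```python
looking_at = [("Jack", "Anne"), ("Anne", "George")]
known = {"Jack": True, "George": False}            # married? (True/False)

for anne_married in (True, False):
    married = {**known, "Anne": anne_married}
    found = any(married[a] and not married[b] for a, b in looking_at)
    print(f"Anne married = {anne_married}: married looking at unmarried? {found}")
# Both cases print True, so the answer is "yes" regardless of Anne's status.
```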
This is another reason to consider metacognition and noncognitives when constructing the desired outcomes of a curriculum. Think about all those ethical reasoning and civic engagement goals. What good is it if students abstractly know what they "should" do, but have no inclination to actually do it?