Friday, February 27, 2009
Collaborative Software
I found one yesterday that beats anything I've seen for sheer simplicity. It's called EtherPad, and it's a free service that allows real-time collaboration with no sign-up. You simply click and start writing.
I've started one as a sample for readers of this post. Click here to see it. Try adding a quote or favorite bit of poetry to the document. If others are online too, you can watch them edit around you; everything is color-coded so you can tell who is doing what. There's a built-in chat also. I started using it yesterday to write a retention document. It works great. There are some limitations, however. There is no text markup for bold, italics, or fonts, so headings are hard to read. Hyperlinks don't link, either, and you can't hide them with HTML anchor tags. Still, this is a fantastic tool. I can see many classroom uses for it as well.
For the musically inclined, there's a nifty Flash-based program called Noteflight for creating, editing, and sharing music. You do have to create an account, but that takes less than a minute. Once you're in, you create music in the time-honored way: moving notes around on the staff, changing pitch and duration, key, and so on. The editor alone is pretty cool, but what makes it really come alive is the ability to play the music you've written by clicking the play button. You can see and hear my test composition below.
Finally, there's a very simple productivity application that I discovered (like all of these, on Reddit) called SimplyNoise. All it does is pipe one of three types of noise to your speakers: white noise, pink noise, or red/brown noise. My favorite is the last of these, which seems to be pitched lower than the other two. It sounds like the sea to me. This soothing shushhhhh can mask otherwise annoying sonic clutter in your workspace. It reminds me of another product I blogged about a long time ago here. That product made sounds based on vowels to hide speech around you.
Thursday, February 26, 2009
Ignorance => Meta-Ignorance
It all comes down, I think, to the continued development of professional expertise for everyone on the job. Encouraging subordinates to challenge our ideas may slow things down a bit occasionally, but in my experience it's a good way to improve decisions. Isn't that what academia is all about anyway? You can't create new knowledge without challenging an existing mode of thought or 'best practice.' (The label 'best practice' makes me grit my teeth--surely any practice can be improved, no? It sounds like an admission of failure. 'Accepted practice' is more honest.)
That ignorance is meta is the conclusion of an article in the New York Times' science section from January 18, 2000, called "Among the Inept, Researchers Discover, Ignorance Is Bliss." The article suggests that there is a double whammy to being uninformed. The ignorant don't know, and they don't know they don't know. That is, they are confident in their knowledge, even when they have little. Author Erica Goode explains:
One reason that the ignorant also tend to be the blissfully self-assured, the researchers believe, is that the skills required for competence often are the same skills necessary to recognize competence.

Cornell Psychology professors Dunning and Kruger, who researched this idea, make some interesting points, as quoted in the article:
- Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it.
- This deficiency in "self-monitoring skills," the researchers said, helps explain the tendency of the humor-impaired to persist in telling jokes that are not funny.
- Some college students, Dr. Dunning said, evince a similar blindness: after doing badly on a test, they spend hours in his office, explaining why the answers he suggests for the test questions are wrong.
If you have followed this blog on the topic of noncognitive assessment, you may recall that realistic self-appraisal is one of the predictors of success. On the other hand, the most able subjects in the study were the most likely to underestimate their own abilities. The researchers attributed this to the fact that, in the absence of information about how others were doing, highly competent subjects assumed that others were performing as well as they were -- a phenomenon psychologists term the "false consensus effect."

There is some hope: Kruger and Dunning were able to 'train in' more realistic self-appraisal skills for those lacking them. The problem, they suggest, is lack of feedback. If you're doing a lousy job and no one tells you, how will you learn otherwise? A certain amount of humility is a good thing.
Of course, there has to be a balance. Paralysis through analysis is no good either. Being too timid to act on a new idea because there is no way to find out if it's good or bad prevents real leadership. After all, if all decisions are obvious, why are they paying you that fat administrative salary? Unfortunately, the Total Quality Management model that accreditors are fond of these days assumes that with enough information, good decisions can be made. That isn't always the case--just look at the stock market. A lot of very smart people with a lot of very good information get it wrong about half the time.
Therein lies the key to good leadership: entertaining new ideas on the one hand, but in spite of little information to go on, intuiting which of them are disastrous. I think this is a very rare trait. As Niccolo Machiavelli wrote in The Prince:
There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success than to take the lead in the introduction of a new order of things.

Readers of this column will not be too surprised that I have an anecdote to supply on this general topic. It was first published in a small college literary magazine, probably a decade ago. I will not insult your intelligence by highlighting my own examples of meta-ignorance in the story. You'll find them easily enough.
Geniosity
I’m a genius. Well, perhaps I should modify that statement just a tad. I was a genius. In fact, on two separate occasions during my life, I have had pure Eureka! moments that lifted me from the mundane to the ethereal. The descent was just as sudden, but at least I got a glimpse of what it must be like to be a real genius—you know, the kind that wakes up and goes to bed still in the bug-eyed goddamn I’m smart state. I suppose some people must find it addictive, having instant blinding flashes of insight that leave them gasping. I wouldn’t want it to happen all the time, though, or I might drive into a tree just as I’d solved the global deforestation problem, for example. All in all, I found it to be quite pleasant, although I noticed right away how other people aren’t very interested in moments of clarity, unless it’s their own, in which case it’s hard to get them to shut up afterwards. So I got a bumper sticker that reads My dog had its day that I put right beside the Towers will be violated and Don’t void where prohibited stickers. It’s a little obscure, but I figure that’s okay because obscurity is hard to tell from profundity sometimes.
It happened while I was doing dishes. The sink in my kitchen is divided into two stainless steel basins, with a faucet arm that can be swiveled to either side. Both sides drain down the same pipe. The problem is that every time you turn on the garbage disposer, which is attached to the right side, it backwashes filthy water up into the left basin. Since that’s usually where I place the dishes to dry, it’s a less than perfect situation. Before my instant of genius, I resorted to turning on the disposer in short bursts so as not to give it time to spew much water back up the other side. I had done this for years. But last week, I had finished stacking the last plate into the rack in the left basin, and was contemplating the pool of foaming dirty water in the right basin waiting to be drained when I had my geniosity (one of the perks of geniushood, even for a part-timer, is the permission to create new words). I realized that if I ran some clean water into the left basin before turning on the garbage disposer, at the very worst only clean water would come back up! It worked beautifully, and I have switched entirely to my new method of draining the sink.
I was beginning to wonder if I’d lost the touch, because my only previous geniosity had occurred when I was in kindergarten, some thirty years before. There was the possibility that that earlier one had been a fluke, but now I’m convinced that if I wait another thirty years something equally profound will occur to me. I’m thinking of starting a newsletter. Anyway, back to kindergarten: it was one of those special days when something extraordinary happens. In this case, we had a magician coming to perform for us in the auditorium. I was hoping it would be the good kind—magicians that do magic tricks, instead of the bad kind—magicians that just play music. It was some time before I realized that musician is a whole different word. We were led into the auditorium in single file, and row-by-row filled up the folding chairs set up on the floor. They started with the back row, and I ended up in the second row, a prime spot for watching the tricks, if they were to materialize. As I planted myself into the child-sized folding chair, I noticed the kids who were being led into the row in front of me. The child about to sit directly before me was Bobbie. This was before last names were invented. Bobbie was a troubled child. He had announced one day on the playground that his real name was Robert, for which the rest of us laughed him to scorn. Really! We might have only been five, but we weren’t stupid enough to believe that you’d call a thing something other than what it was. A Bobbie was a Bobbie, and a Robert was something quite different.
As Bobbie prepared to sit, I had my geniosity. If I were to pull his chair back, I thought, he would miss it and end up on the ground! No sooner had inspiration struck than did I put it into action, and to my amazement it worked! Bobbie plopped right on to the floor, and then looked around with the most bewildered expression, which could be interpreted as how did I miss a twelve-inch wide chair with a six-inch wide butt? His universe had changed forever, as had mine. He had discovered The Unexplained, and myself a profound moral question:
Are some geniosities best left unimplemented?
Sadly, this question hasn’t gotten the attention it deserves, despite my well-received article (cf. “Gravitational effects of translation while sitting,” Kids Today 1969, v102, pp 87-89) which raised the issue. It is even more important today than it was then. Do you think those guys at Los Alamos, mucking around in the desert, really thought they could build an atomic bomb? Ironically, the typical defense used when a geniosity is misapplied is the stupidity claim. I’m ashamed to say that’s exactly what I used to explain Bobbie’s unexpected contact with the floor. The resulting Q&A with my teacher Miss Birdbalm is instructive. My comments are in brackets.

Q: Did you pull Robert’s chair out from under him? [Direct question, a tough nut.]
A: That’s not Robert, it’s Bobbie! [First attempt—misdirection, obfuscation.]
Q: Did you? [Miss Birdbalm was not easily distracted.]
A: Yes, but I didn’t know that Bobbie would sit on the ground. [The stupidity defense.]
Q: Why did you pull the chair back? [Her first mistake, questioning my intentions.]
A: I thought it would help. [Who can argue with good intentions?]
Q: How would pulling Bobbie’s chair back help him? [She’s a goner now.]
A: I noticed that he walks with a limp, so I calculated the moment of inertia about his ankles and concluded that his head would be whiplashed against the back of the chair upon sitdown. Unfortunately I overcompensated and pulled the chair out too far. Believe me, it would have been worse if I’d done nothing at all. I’ve got all the bugs worked out now... [You get the idea.]
The problem with geniosities is that they are too precious not to be implemented, so that Whoa, I could do THIS!, is inevitably followed in short order by Whoa, I did THAT! Maybe this is the march of progress, but I’d like to propose a moratorium on geniosities until we get this sorted out. Except for mine, of course. I only have helpful ideas now.
Tuesday, February 24, 2009
Categories of Risk
Interestingly, the actual searches for "FAFSA" have declined (I checked FASFA too, a common misspelling). It's amazing how regular the pattern is over time.
Overall trends point down: finances are on the decline, and the effectiveness of traditional marketing is going down with them. These are the times when good leadership is called for. In reflecting on this over the last few days, I recalled another time of uncertainty--during the Iraq invasion--and our erstwhile Secretary of Defense's comment. Quoting from Slate:
The Unknown
As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.

—Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
Some people thought this was nonsense, but it seems like a perfectly good way of categorizing risk to me; there are variances we can bound, and those we can't. For example, based on historical yield rates, we could say that our 'normal' yield rate is R plus or minus 3%. The actual rate is a known unknown, but we can deal with the variability with a reasonably accurate contingency. The current trends create plenty of uncharted territory, however. We have no basis from which to judge variability: an unknown unknown. That's where leadership and vision come in. Of course, those who lead in the wrong direction will possibly be weeded out of the gene pool, but it's still better than doing nothing.
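Here's the "known unknown" bookkeeping as a minimal sketch. All the numbers (admit count, yield, variability band) are made up for illustration, not drawn from any real admissions data:

```python
# Toy contingency calculation for a "known unknown": yield varies,
# but history lets us bound the variance.
admits = 1000
yield_rate = 0.30   # historical mean yield ("R")
band = 0.03         # historical variability, plus or minus 3 points

low = admits * (yield_rate - band)
high = admits * (yield_rate + band)
print(f"Expected class: {admits * yield_rate:.0f} (range {low:.0f}-{high:.0f})")
```

An unknown unknown is precisely the situation where no defensible value for `band` exists.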
There is a missing term in Rumsfeld's algebra of risk. Ironically, it turned out to be the Achilles heel of the Bush administration. I refer to "unknown knowns." These would be things like CIA reports of increased terrorist activity before 9/11, the fact that the yellowcake pretext for invasion wasn't true, and so forth--knowledge that was there for the finding within the organization itself. This isn't intended to be political--they are just apposite examples in the context of Rumsfeld's remarks.
In the higher education realm, within any institution there are bound to be plenty of "unknown knowns." This is because the structure is very hierarchical and organized into silos of responsibility and knowledge. Economics or finance professors generally have little to do with planning the endowment investments, for example. Statisticians aren't usually invited to predict enrollment. I speak of this solely from my own experience. Maybe yours is different.
We should pay attention to the parents of the incoming class of 2009 and look to broad pools of information, even if these are not organized into our normal bureaucratic categories. Going further, we should create these institutionally with wikis and blogs, with message boards and other forms of sharing opinions that can be done freely in a kind of marketplace for ideas. I figure if you don't have a pile of good ideas left over after you've expended your resources on current projects, then you're not really looking. Even working at full steam, there should always be a wish list of to-dos, so you can apply an evolutionary process of weeding out the low priority ones and focusing on the best ideas.
I've been reviewing an open source project called Cyn.in that purports to be a portal for such things. I have it running on a virtual machine, but haven't really warmed up to it yet. You can see some screenshots on the official website. Some of the integrated elements include blogs, wikis, galleries, calendars, mindmaps, workflow, and indexes to these. It's pretty, but seems a bit slow on my system. A mindmap is shown below.
This is one way to start sharing institutional knowledge. There are plenty of others. The larger point is that it's important to be constantly churning ideas and even-handedly ranking them. We tend to focus on external threats and opportunities, but the revelation to me in my survival research was that internal threats and opportunities are just as important for any organization that wants to survive the long haul. There is, in fact, some chance that any organization will self-destruct due to the unpredictability of its internal processes. Worse, this probability is a true unknown unknown--it logically can't be known. But that doesn't mean we can't look for ways to increase the probability of survival--and looking internally is a good place to find out things you didn't know you knew.
Monday, February 23, 2009
Admissions as Drama
Much of what's in the report is insight into the impact of the worldwide depression on the consumer market. Tuition growth is pegged at 439% since 1982, according to the report, compared with a 147% rise in median family income. Even if we assume that discount rates went from zero to 33% over the same period, parents are paying far more now than before. And this situation is getting much worse because of the crash in equities and dwindling access to loans of all types.
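The arithmetic is easy to check. The growth figures are the report's; the 33% discount is my assumption, not theirs:

```python
# Net price growth vs. income growth since 1982 (report figures plus
# an assumed move from 0% to 33% discounting).
tuition_factor = 1 + 4.39                 # sticker price: 5.39x
income_factor = 1 + 1.47                  # median family income: 2.47x
net_factor = tuition_factor * (1 - 0.33)  # ~3.61x net of discount

print(round(net_factor / income_factor, 2))  # ~1.46: net price still
# outran income by roughly half, even with generous discounting
```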
Parents will do what shoppers always do when money is tight, the report argues: shop around for bargains. Quoting from one of the bullets in the article:
In October 2008, nearly 60% of surveyed high school seniors were considering a less prestigious college for affordability reasons; 14% changed their focus to a two year college; 16% put their college searches on hold. (MeritAid.com)

This is not the year to take your yield rate for granted. This is an important calculation many private schools will be making right now. How much do we shave off of our enroll/admit ratio when predicting next year's entering class?
It is already getting pretty late in the admissions cycle, but there's still time to change the road you're on. One of the recommendations of the Lawlor Group is not to take the value of the institution's product for granted. They recommend supporting it with evidence.
If your applicant pool includes first-generation students, the problem is likely to be worse. This is my own speculation, based on research done during the last several years. Lower-income and/or first generation families are likely to complete their taxes later, which complicates an already complicated FAFSA application. Awards can be put on hold while information that was put on the FAFSA as preliminary is updated and verified against returns when they come in. This is another whammy directed at low-income groups, in addition to low ability to pay, correlated lower entrance test scores, and dwindling need-based aid.
The form is supposed to come under review as one of President Obama's initiatives, but that will be too late for the current cycle. In the meantime, companies have sprung up that will help you with the form, for a fee comparable to H&R Block doing your taxes. One of them is www.fafsa.com, which charges $80-$100 to walk you through the form in under half an hour. That price may not be prohibitive to most middle-class families, but it is almost certain to be a limiting factor for some demographics.
Shopping for scholarships is another trend that is likely to accelerate, methinks. This is the time to be creative with institutional aid. Discount rates are almost certain to grow, unless formal tuition cuts become common. Here's a site that lists institutional awards, both endowed and unfunded: www.meritaid.com. Another barrier to unsophisticated consumers of the higher ed product is negotiating (in all senses of the word) the award hustle. Anything we can do to lower barriers and funnel students from one decision to the next, one bit of the bureaucracy to the next, is a good thing.
Think of your application process as a movie script, where the FAFSA is smack in the middle of Act II. Is your film a long complicated thing or a fast-moving action flick? Who can best negotiate the plot twists--have you picked the right audience? Some tips come from the Lawlor Group's final recommendations:
Private colleges must continue their efforts to overcome students’ perceived financial barriers to enrollment by (1) demonstrating a commitment to containing costs, (2) making the net tuition price as transparent as possible as early as possible, (3) structuring institutional aid to increase the enrollment of low- and moderate-income students, and (4) implementing programs that narrow completion gaps across socioeconomic groups.

This is an outline of the story we have to tell in the 120 pages of our three acts (transforming the recommendations above into a dramatic arc of sorts):
- It's cheaper than you think; you can afford us.
- See, here are the costs.
- And here's how we help you afford it.
- The payoff is emotionally satisfying.
Wednesday, February 18, 2009
The Power of Discriminant Pricing
Now think of the same question in a different context. The public schools that our kids attend typically get a large portion of their funds from property taxes. But this tax depends on the value of the property, which is a good proxy for ability to pay. Rich folks live in McMansions and pay a lot more than the owner of a more modest home. Is this fair? It's the same question.
It's easy to see the downside (the apparent unfairness) of this discriminant pricing, and perhaps that reaction is psychological. But think for a moment about the inherent cost of fixed pricing. Imagine that you want a soft drink, but only have $.50 in your pocket. The machine requires $.55. It won't give you the time of day unless you pay full price. Suppose, however, that it were sentient, and you could negotiate with it. You could sometimes pay a little more and sometimes a little less, perhaps averaging around $.55. Wouldn't that be more efficient for all concerned? You get more soft drinks when you want them, and the vendor has a steadier cash flow. If this were done for all consumers, not just you, the average price of the soda could perhaps even be lowered because of the increased volume and attendant economies of scale. Let's say it's possible to decrease the average price to $.50.
Taking this one step further, there will be people who can always pay more, and will always be paying, say $.55 for each fizzy beverage. They are no worse off than before the smart vending machines arrived, but lots of people are better off than before. Is this situation not fairer than the one where everyone pays the full original price? That point is perhaps debatable, but it seems a bit mean-spirited to deny the discount to those who benefit most from it.
In practice, this kind of thing goes on all around us. Stores issue coupons and run limited-time discounts. Those with less money to spend are more likely to pay attention to these opportunities than those with more money, for whom the time spent is perhaps worth more than the savings. The same thing applies to where you shop. If you want an upscale shopping experience, plan to pay more for the same items (or functionally similar ones, anyway) than you would at Wal*Mart.
The same principle applies to institutional aid. I wrote an article about this some time ago, but had to present these ideas recently in meetings, and so I did a rethink to try to make it more comprehensible. It's hard to explain average probabilities and demographic slices--it all sounds too theoretical.
Imagine that we look at a sample of applicants to our institution and have insight into their willingness and ability to pay the costs of attendance. This is represented in the graph below. Each bar represents an applicant, and the higher it is, the more cash they'd be willing to cough up to come to our fine university. Generally this will look something like a power law distribution, but I didn't try too hard to reproduce that here. It won't matter.
In the ideal case, we could use individually tailored aid packages to collect 100% of the area in those bars if we wished. We'll ignore the marginal costs per student in this exercise and just think about revenue. So the total revenue possible is the sum of all the bars, which could only be attained through discriminant (i.e., individual) pricing. What happens if we have a fixed-price model? The graph below shows two variables that depend on the price we set.
If we set the price at zero, everyone can attend. This is the blue line, which starts at 100% on the left, and decreases as the price increases. This is an obvious effect--the higher the price, the lower the attendance with fixed pricing. Revenue (the tan line) is more complicated because it's enrollment times price. This increases to an optimum price, and then decreases again as enrollment drops toward zero.
Notice that the maximum amount of revenue that's attainable in this fixed-price example is between 50% and 60% of the total. That is, by using fixed pricing, we cut our possible revenue by almost half. This is a powerful demonstration of the cost of the inflexibility of a single-priced model. If you think about the kinds of things you can buy for thousands of dollars--real estate, cars, college tuition--these things are generally negotiable. There's a good reason for that, as we've just seen.
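If you want to play with this effect yourself, here is a minimal simulation. I've assumed a uniform willingness-to-pay distribution and invented the dollar figures; the exact shape matters less than you might think:

```python
import numpy as np

# Made-up willingness-to-pay for 200 hypothetical applicants.
rng = np.random.default_rng(0)
wtp = rng.uniform(0, 40_000, size=200)

# Revenue under perfect individual (discriminant) pricing: collect it all.
total = wtp.sum()

# Under a single fixed price p, only applicants with wtp >= p enroll,
# and each pays exactly p.
prices = np.linspace(0, wtp.max(), 500)
revenue = np.array([p * (wtp >= p).sum() for p in prices])

print(f"Best fixed price captures {revenue.max() / total:.0%} of the total")
# -> roughly 50%, consistent with the 50-60% ceiling in the graph above
```

With a heavier-tailed (power law) distribution the captured fraction drops further, which only strengthens the point.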
The discount rate is the usual way to talk about how much institutional aid is being given. It may be assigned for reasons other than to increase attendance or revenue. For example, applicants seen as particularly desirable may be given 'merit' awards. The discount rate is the amount of unfunded aid given divided by the cost to attend.
A 2006 report from Noel-Levitz puts the average discount rate for private institutions at about 33%. You can also use the IPEDS comparison tool to compare your college's institutional grant average to a peer group you choose (and a lot more).
Tuesday, February 17, 2009
Library Portals
I do agree that it could be made easier, and there are services that can index all of your databases and provide a central search capability.
The portal is especially needed for off-site access, where one must generally authenticate through a proxy server (like EZProxy) to gain access.
In other news, I'm evaluating a workflow solution called PerfectForms. It's Flash-based and very slick. It allows registered users to create forms and workflows. You can attach scanned documents, too. Below is an example I made (a work ticket).
I will sign up for the demo account and see how hard it is to create a workflow for an actual process. We are currently reviewing office procedures to find efficiencies, so this is a good time to do such a thing. I'll post a more detailed review here afterwards.
Saturday, February 14, 2009
Blue Hat Syndrome
I had joined the Army National Guard and ROTC for two reasons. One was to help pay for college, but the more important reason was that my two best friends were doing it. And after a summer at Fort Benning, Georgia, crawling around in the sand, I was back for round two as an officer cadet in the equally hot, equally humid midwest.
It's natural that uncertainty breeds expertise. Think of patent medicine 'cures' or soothsayers guaranteeing answers to questions. In the absence of compelling evidence, experts pop up like mushrooms. One such expert was Cadet Kane. From my journal:
On a cloudless hot day I perched on a wooden bleacher to listen to a green man talk about machine guns. The topic of his lecture sat on a wooden table on a tripod. He had spoken the same words every day for a lot of days, and they came rolling out of his mouth polished and round, like the stones you find at the bottom of a creek.
He seemed to never stop for breath, and the combination of his sing-song voice and the fierce sun made staying awake a chore. I pinched myself every few seconds. I hyperventilated. I held my eyes wide open for as long as I could without blinking. Nothing seemed to work, but I managed the appearance of wakefulness.
When it was over, my platoon broke for lunch, and I was able to talk to the other trainees for a bit. One of them was a military genius by the name of Cadet Kane. He gave me one of those "I know something you don't" looks.
"You can't shoot the M2 at people," he said. The M2 was the Army's unimaginative name for the gun we'd been hearing about. It's a monster that shoots bullets a half-inch across. His statement didn't seem right to me. What good is a machine gun if you can't shoot it at people, I wondered.
"It's a violation of the Geneva Convention," he continued. I refused to give him the satisfaction of seeming interested, so I encircled my mashed potatoes in preparation
for a desperate assault.
"Fifty calibre is too large a gun to be used for anti-personnel purposes. You can only shoot it at equipment."
That seemed ridiculous to me. I hadn't joined the Army to become a lawyer! What kind of morons had been at the Geneva Convention, anyway, I wondered. They were probably too busy boozing to get any real work done.
"But you CAN shoot at uniforms!" Cane announced gleefully. "Boots, dogtags, anything like that."
That was too much. "You mean to tell me that if an enemy soldier is wearing clothes, you can shoot him, but if he's completely naked, you can't?" I asked, giving up on my potatoes.
"Not with the M2," he chortled.
I imagined my position being overrun by a horde of nude men as I watched from behind my lethal, but prohibited, machine gun. What would I do? Violate the sacred Geneva Convention, or let them run on by?
"Although," he said thoughtfully, "they might be wearing contact lenses, and there would be no way to know until they were very close. I'd say you'd have a good case for shooting them anyway, just in case they were wearing contacts."
I figure that the guys with machine guns will ALWAYS find a loophole. Keep your clothes on. Mark Twain was right: naked people have little impact on society.

This legalistic notion is, I think, urban legend, and generally such expertise can be easily disputed by the likes of snopes.com. But there is plenty of snake oil still to be sold anywhere uncertainties and stress are to be found. I related this instance because it's personal and funny, and because it segues into the second half of this tale nicely.
The blue hats were symbols of leadership during training. The officers in charge rotated the cadets through positions of authority like commanding officer, executive officer, platoon leader, and so forth. The idea was that the trainees would get some experience in leadership positions by having a day on the job. When it was your turn to be executive officer, you'd get a blue helmet with 'XO' painted on it. This contrasted nicely with all the ordinary green helmets around, and looked a bit like the UN had dropped in to observe us. But I found out quickly that to be so anointed with power, temporary as it might be, did not make one a peacemaker. Au contraire.
It became quickly apparent that when a perfectly nice guy or gal (this was coed training) donned the blue helmet, he or she became an instant jerk. I don't mean the kind of crankiness that might be natural when you've come under sudden stress, but rather all-out general jerkishness--yelling at the other cadets, chewing them out in public, strutting around like a peacock, and worse. It's as if Clark Kent became General Patton for an afternoon.
I resolved that when it was my turn to wear the blue, I'd not change my personality. I was inoculated, you see, by seeing the experience of others--this lycanthropic transformation into rabid drill instructor mode.
I failed. As soon as that blasted helmet was on my head, I became as bad as all the others I'd seen. This was not evident to me in the glare of the sudden attention garnered by the 'executive officer' status of Cadet Company Bravo. I was too busy making plans, assigning tasks, worrying about the weather and the food, and what I'd say to the (real) major who commanded our group. But when it was over, I was able to gain a certain perspective and felt ashamed by my own transformation--made doubly worse because I'd seen it coming. I wondered for a long time if this sort of thing was inevitable, or if it could be overcome, like exposing a werewolf to ever increasing doses of moonlight to control the disease. I called this phenomenon of sudden authority leading naturally to anti-social behavior the Blue Hat Syndrome (BHS). I've seen the syndrome exhibited many times since then--including a couple more times in my own behavior.
So to tie together the strands of this story, imagine a blue-hatter who finds himself in possession of momentary power over others--the authority to review work and render judgments--and that the domain of interest is a fuzzy one where real expertise is not available. An example would be, say, the Spanish Inquisition. Another would be accreditation visits for determining compliance with learning objectives.
Uncertainty, as we've noticed, can easily generate a body of 'knowledge' that has little to do with reality, such as with astrology. For example, much of what is 'known' about assessment of learning falls into the same category. We imagine we know more than we actually do, because of the nice theories we've made. We speak of learning outcomes 'measurement' when it clearly is no such thing, and produce nice-looking rubrics and reductionist definitions that would warm the heart of any positivist, but which may have little applicability to the actual problem. Not all problems can be solved through such methods, but they're familiar and seem obviously destined to produce results until one actually tries to do so. This is a bad situation for the ones being reviewed.
The recipe is this. First, we have a good candidate for blue-hat syndrome on campus--someone who is probably persecuted on his own campus for ramming effectiveness planning down everyone's throats--suddenly able to see with sparkling clarity the way things really ought to be. He's prepared with flowcharts and Nichols diagrams spelling out in engineering-like precision the 'conceptual framework' carrying the deterministic guarantee of spiraling success. If only the practitioners at the institution would have the will and perspicacity to carry out this plan.
But they don't. This is because the magic plans and guarantees of success are just markers in a wide plane of possibilities, none of which is certain to lead anywhere useful. Total quality management is a bad fit for an educational enterprise because it's very difficult or impossible to certify what exactly a student has learned, and whether or not it is due to our efforts. Good leadership, vision, and judgment should be more prized than matrices and statistics, but that's not the case. Into this corridor between theory and reality strides our blue-hatted hand of fate.
The theory is seductive. Matrices and taxonomies and definitions and rubrics. It looks like science, and it could be if we were talking about measuring the expansion of the universe by counting up supernovas. But we're not. In the nebulous realm of assessing education, too much weight is given to the accouterments of science, so that it can easily become pseudo-science. But it is the outcomes of science that we are held accountable for. If we cannot show that our 'scientific' process produces results, we find the baleful gaze of the blue-hatted messenger falling upon our IE reports in disdain.
The situation is almost guaranteed to be a disaster. It's a recipe for reports that pretend to do what they're 'supposed to', to a tacit understanding that there is a game to be played, and that the most important factor in accreditation is who is on your visiting team. This is too bad, because the goal is a worthy one. As Yogi Berra is reputed to have said:
In theory, theory and practice are the same. In practice, they're not.

My own accreditation horror story is thus. The team member judging effectiveness was clearly affected by BHS. He exhibited the classic symptoms of self-inflation and utter certainty about his mission. Our plans were a mess, and we deserved to be gigged, so I can't fault him for that. But not actually reading our stuff was an insult. Rather, he took one look at our system and pronounced it unfit. He drew for me the 'correct' way to do institutional effectiveness--a flowchart to success. Our theoretical underpinnings, you see, were not of sufficient pedigree to pass the white-glove inspection. There was dust all over our section 2.5, in his judgment. But what really got me was his description of a real event. In his zeal to explain how things were supposed to work, he chanced from the beautiful theory into the ugly practice--his own.
The reviewer's college was a specialized one. They had an orientation course on this specialized subject for first-year students, to introduce them to the culture and prepare them for the subsequent curriculum. To find out whether or not knowledge from the course was retained, the students were tested in their senior year on this subject matter. My blue-hatted teller of this tale leaned close to make sure I got this point: the seniors weren't doing so hot on this test, he explained. He waited for the light to dawn in my eyes that this was a matter of institutional effectiveness. Then he went on, describing how they'd researched this problem carefully, figuring out how those test scores could come out better. The conclusion: teach this specialty course in the second year instead of the first. And you know what? It worked--the seniors scored higher after that change was made! The zeal practically dripped from his trembling lips as he concluded his description of the case study--this exemplar of institutional effectiveness. I just love this stuff, he said.
Maybe he was right. Maybe his system, with its glossy flow of reports, was better than our system. But what's certain is that the system for accreditation itself--of marrying a BHS situation to a flawed epistemology--is one guaranteed to create not a culture of assessment so much as a neo-scholastic culture of procedural adherence and nit-picking.
Update: A friend emailed me an appropriate quote attributed to Kurt Vonnegut. "Be careful what you pretend to be, because you are what you pretend."
Update 2: See Dostoevsky's description of BHS from House of the Dead here.
Thursday, February 12, 2009
More Critical Thinking Debate
At the core of the problem is an issue that hasn’t been discussed much when it comes to measuring learning outcomes in the higher cognitive skills: We literally don’t know what we’re talking about.

The argument recapitulates a running one I've presented here: testing is easy for analytic/deductive processes, hard or impossible for inductive/creative processes. What's more interesting is the long list of comments on the article--the beauty of the internet. These run the gamut from blaming the faculty to "of course it can be measured" to the interesting notion that the faculty themselves can't think critically, so why should they be expected to teach such a thing?
[...]
There is widespread agreement that “critical thinking,” for example, is terribly important to teach: the term pops up in nearly every curriculum guide and college catalog. There is no agreement, however, about what critical thinking is.
It's interesting to view this debate through the lens it attempts to focus: does this debate demonstrate critical thinking? Much of the debate embodied in the comments is speculation and opinion. I think we can agree that doesn't qualify. There's a claim or two of "such and such is supported by the evidence," but the hyperlink provided is tangential or lacking.
One comment author with some empirical evidence to bring to bear is (self-identified on the comment form) Robert Tucker, President of InterEd, Inc. The conclusion from his experience in trying to build critical thinking tests is this:
Critical thinking is not so much a construct as a family resemblance concept (i.e., there is no single criterion common to all cases we want to call “critical thinking;” instead, like a braided rope where no single strand runs the full length, individual criteria are shared with a subset of cases with considerable overlap across various criteria and cases). I have come up empty whenever I have attempted to find a single non-trivial, non-tautological facet of critical thinking that all cases of critical thinking have in common.

This adds another dimension to the 'I know it when I see it' definition of critical thinking and its ilk--it isn't even the same thing to everyone. You may think that X is evidence of critical thinking, and I may disagree. In fact, one can argue that we should be teaching our students to be good citizens by thinking critically about civic engagement, including assessing who is the best candidate. But first we'd have to agree on the process and outcomes. So theoretically, we could present an array of facts and test students on whether or not they judged the correct candidate to be the best choice. This gives rise to a couple of questions:
- Are we so sure that this magic process exists, of finding the best solution to a fuzzy problem?
- If so, then why are we so hesitant to apply it and advertise the results?
Big fuzzy problems don't admit nice neat solutions. Evidence of critical thinking is found not in the solution but in the methods of approach, which are not guaranteed to work. So forget about the outcome as the demonstration of critical thinking. It's much better to focus on building thinking tools individually: tools and techniques that can be used in appropriate contexts. Higher education has bitten off more than it can chew with this unfortunate idea of teaching "critical thinking."
Wednesday, February 11, 2009
openIGOR update
A partial screenshot of the GUI interface for effectiveness planning is shown below.
I've decided to try to use the goals and objectives reporting function to manage projects as well. A simple project planning sheet can be filled out and put in the archive:
Project Planning Worksheet

Project title: Date:
Complete by:
1. Scope: What are we trying to do, and how will we do it?
a. Who is the project lead?
b. Who are the team members, and what are their roles?
c. What external constituents do we need input from?
d. Describe the project’s desired outcome
e. What are limiting factors or potential barriers to completion?
2. Time: When do we want it done, and what milestones can we schedule?
a. Planning complete: _______
b. Execution complete: _______
c. Testing complete (if required): ______
d. Project closed: _______
3. Cost: What budget amount and line do we anticipate using, and when will we need it?
a. Personnel costs
b. Capital expenses
c. Operational costs

These worksheets can be linked to objectives with hyperlinks to the repository from the objectives fields. This has the advantage of automatically documenting the TQM model that our accreditors require while achieving the practical function of managing the tasks we need to do anyway. The only thing missing is a separate task management system: projects are often broken up into tasks.
It does occur to me, however, that any form-based document could be easily handled within the system to create a mini-workflow system. The forms would have to be hand-built HTML documents, but the data transactions from them could be handled by the IGOR system. I've built such a thing before, and it was very useful for handling customized assessments of portfolio work. Stay tuned...
Tuesday, February 10, 2009
The Ugly Truth about Aid
This admittedly contrived example mimics what I've seen around the table during planning meetings. There never seems to be enough money to do what a college needs or wants done, so if the state raises aid by $500, guess what the revenue discussion quickly leads to. An article in The Chronicle puts the point on it:
Sen. Lamar Alexander, a Republican from Tennessee, former education secretary, and former college president, said many members of Congress worried that increases in Pell Grants were too often offset by increases in tuition.

It surprises me that this wouldn't be the standard assumption at tuition-driven private institutions. The argument would go like this:
We currently have N students, paying T tuition each, and receiving an average aid package of F. If F goes up $500, we can raise T by $500 and be in the same market as before. The current students won't notice, and we haven't put ourselves at an additional market disadvantage by raising costs. In practice, of course, the tuition may go up by more than $500, but the same logic applies.
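The logic is easy to check with toy numbers (all hypothetical):

```python
# The tuition-follows-aid argument in numbers.
N, T, F = 1000, 25_000, 10_000   # students, tuition, average aid package

net_before = T - F               # what a family actually pays: $15,000
# State aid rises by $500; the institution raises tuition by $500.
net_after = (T + 500) - (F + 500)

print(net_before == net_after)   # True: families are exactly where they started
print(N * 500)                   # $500,000 in new revenue to the institution
```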
In effect, increases in state and federal aid subsidize new expenses at institutions, or help them cope with new costs. I would argue that the last decade has seen a growth in administrative functions and costs, an increase in projects not directly related to the classroom, and generally a mission creep away from the core educational product. This is not just because of increases in state and federal aid, but also because of the availability of loans.
The stimulus package may provide some hope of another cash infusion from the government. I think it's best not to be fooled by this false spring. I see the current financial storm as something like the K-T boundary in evolutionary history--a time where most of the existing biological designs didn't make the cut. The ones that did (including our ancestors) flourished. From the article:
"Use this time of retrenchment to try something new," Mr. Alexander told the gathering. "You'll probably have to anyway because of the way the economy is."This is excellent advice--rapid evolutionary changes are coming to those that will survive. It's a good time to be nimble. Machiavelli noted that during stressful times, one can make radical changes. Okay, he was talking about wiping out the opposition after a rebellion, but the same principle applies. Good, strong leadership will be the key to the transitions ahead.
My recent research project on survival leads to a novel conclusion: we can't be solely concerned with external threats. Internal threats are just as bad--particularly the way in which we make decisions. Free exchange of ideas and ability to challenge the boss on a bad idea are essential to this kind of ongoing internal audit.
The only thing I take issue with in the last quote above is the use of the word "retrenchment." I think a trench is the wrong image to conjure. Trench warfare gave way to the Blitzkrieg, and that's a better image: mobility is better than stasis.
Monday, February 09, 2009
Beyond the Big Test
Well, silly me. Apparently this subject has a long history in the literature and is a quite well developed concept. Some major schools like North Carolina State University have used these methods successfully. So my research program to find variables for use in predicting academic success can accelerate considerably--we just have to customize the work others have done.
I have not finished the book, but can tell already that it's a wonderful resource. The background and history of noncognitive assessment is given, as well as solid research findings, actual survey instruments, and examples of how to coach staff in looking for these traits in interviews, on existing application materials, essays, etc.
The focus of the book is toward leveling the playing field for what the author calls non-traditional students. In his usage, this means anyone who isn't a white male. My purpose is more targeted, but the material is no less useful for it.
The specific noncognitive traits identified, with some help from factor analysis in the process of ascertaining construct validity of surveys, are as follows (pg. 7); a toy sketch of how they might feed a predictive model follows the list:
- Positive self-concept
- Realistic self-appraisal
- Successfully handling the system
- Preference for long-term goals
- Availability of strong support person
- Leadership experience
- Community involvement
- Knowledge acquired in a field
I'll be recommending that our university proceed full speed ahead with this project, to catch what we can of the current cycle. This will undoubtedly make Mr. Sedlacek happy, as it will entail buying many more copies of his book for distribution.
Saturday, February 07, 2009
A Category is a Convenient Fiction
- Theory of evolution
- Game theory
- Information theory
Evolution in the broad sense means that things are the way they are now because of processes and conditions that have existed in the past. Ontogeny recapitulates phylogeny, perhaps not in the literal biological sense, but in essence. This perspective allows one to look under the surface of things and ask the five whys about an existing situation. This is a kind of archaeological dig, or sequencing of DNA (whichever metaphor you prefer).
The results of evolution are messy, redundant, and complex. There's a good reason for this (see my article on survival, for example)--it works. The tax code is a good example of an evolved artifact. Organizational structure is another.
Game theory [wiki] is a master key for helping to understand human interactions. In combination with evolutionary ideas it's even more powerful (see The Selfish Gene by Richard Dawkins). Understanding a process like budget allocation is much easier, I believe, if one thinks of the transactions as formalized in a game theory context. For example, no matter how idealistic a process for dividing the loot is, it will always be easier to give away money than to get it back. Therefore budget directors will always be trying to pad their budgets. Rather than try to combat this, it's better to put people you trust in charge. This is the same debate as planned economy vs free market on a micro scale. (Maybe that's a bad example these days.)
Information theory has many aspects, but the most charming is the idea that in a given language some statements are concise and others are verbose. This is probably obvious, but what may not be so is that we can actually measure it. In common language, frequently used words tend to be short (yes and no, for example). Imagine if 'the' were a three-syllable word!
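Here's the measurement in runnable form. Shannon's result is that an efficient code gives a symbol of probability p a length of about -log2(p) bits, so frequent words earn short codes. The toy sentence is mine, not from any corpus:

```python
import math
from collections import Counter

text = "the cat sat on the mat and the dog saw the cat".split()
freq = Counter(text)
n = len(text)
for word, count in freq.most_common(3):
    p = count / n
    # Ideal code length shrinks as frequency grows.
    print(f"{word!r}: p={p:.2f}, ideal length ~{-math.log2(p):.1f} bits")
```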
Bureaucracies speak in languages whose grammar and vocabulary are processes and documents, used in conjunction with local languages to facilitate transactions. If I say a student was admitted provisionally because of his high school GPA, someone familiar with higher education will understand because of his or her familiarity with the semantic field of higher ed speak.
Some of the lessons I've learned from using information theory in the wild are:
- Generally, simplicity is better than complexity. Complexities are inefficient, and as a result there is an evolutionary tendency to simplify them through compression. As a trivial example, I started calling the "Administrative Council" AdCo in my emails because I was tired of typing all that. More serious examples include abbreviating some workflow process to cut out complexities. Unfortunately, those complexities may be important, and simplification is better done deliberately than through the whims of evolutionary practice.
- As a corollary, it's nice to be able to recognize randomness when one sees it. Data is random if it can't be compressed further (see the sketch after this list). Usually we reserve the word for high complexity randomness. For example, recognizing that the fate of students who come to the university is largely random is important. More about that in a minute. I would also argue that the result of most committee interactions is largely random. Different people at a different time would produce different results most of the time. This realization leads me to believe that committees are best employed when one wants to get something/anything done, and NOT when one wants to get a particular thing done. That provides a natural litmus test for assigning a group of people or a single director to a project.
- Finally, understanding a bit of information theory allows one to recognize when data has* been compressed. I'll explain further.
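Before going on, here is the incompressibility point from the list above as a rough-and-ready test, using zlib as a crude compressor:

```python
import os
import zlib

# Data that zlib can't shrink is, for practical purposes, random.
patterned = b"provisional " * 100   # highly redundant
random_ish = os.urandom(1200)       # as random as the OS can manage

for label, data in [("patterned", patterned), ("random", random_ish)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")
```

The patterned bytes collapse to a few percent of their size; the random bytes don't compress at all.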
A useful idea is that of low vs high frequency components of a message. This is intended as a metaphor here, but it's very literal in the world of communications engineering. Low frequencies are the bits that don't change much. You're safe in assuming them. If I say "I saw a cat," you'll imagine a four-legged critter of a particular shape, with fur. That's the low frequency component. If you want to talk like a geek you can call it the "DC bias." The high frequency bits are the details. You might ask me what color or how big the cat was, or what it was doing, or where I saw it. This is the detail that gets compressed away when we want to communicate quickly. Some people, unfortunately, have a communication style that ladles on information regardless of frequency. This kind of thing: "I was going to the mall, driving on the interstate, when the phone rang. It was Lois. I talked to her for a bit about the birthday plans for Saturday, and when she was about to hang up, the car started driving funny. So I pulled over and found I have a flat tire. Can you come and get me?" I'll leave it to the reader to distinguish the DC bias (the most important bit) from the high frequency chatter.
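For the literal-minded, the DC bias really is the zeroth Fourier coefficient. A quick check with an invented signal:

```python
import numpy as np

# The zeroth FFT coefficient is just the sum (N times the mean) of the signal.
signal = np.array([5.0, 5.2, 4.9, 5.1, 5.0, 4.8])  # mostly constant, plus detail
spectrum = np.fft.fft(signal)

print(spectrum[0].real / len(signal))  # 5.0: the low-frequency "gist"
print(np.abs(spectrum[1:]).round(2))   # the high-frequency "details"
```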
It's not by coincidence that high frequency information is often referred to as noise. The signal-to-noise ratio is a measure of how much patience you have to have to get to what you're interested in.
Categories are convenient fictions because they summarize a description of data into compact form. A category is the DC bias for some more complex concept. And categories (or nouns in general) are all hooked together so that it's easy to make predictions or reach conclusions given some simple set of information. For example: "Johnny is a lazy student." Reading this, we can reach all kinds of natural conclusions about Johnny. He's less likely to be on the Dean's list, probably doesn't spend much time studying, etc. We extrapolate all this high frequency stuff from the simple description. It may not be true.
Therein lies the rub. What if the DC bias isn't very large in comparison to other parts of the spectrum? That is, what if the high frequency part is more important than the bias? Here's a very practical example.
A student who fails to meet admissions standards can sometimes be admitted as a provisional student. Typically these students are watched carefully and given lighter loads than regularly admitted students. The provisional students are put in this category because of their high school grades, ACT/SAT, or other information used by admissions. But how important is the information contained in this categorical description, compared to all the other things we might want to know about the student?
One form of validity for a concept is predictive validity. Does the application of this label to this situation help you understand its future evolution? Does an astrological sign help you understand the professional future of a student? No--this method has no predictive validity. So the astrological sign as a low frequency piece of information is useless. But because we're so used to dealing in compressed data, it's easy to fool ourselves. Back to the example of the provisional admit.
In fact, about half of the provisional admits (in my research) will do quite well. And if you compare them to the regular admits at the bottom end of the qualification range, their futures are comparable--about half of the latter will fail. So the information conveyed by the category "provisional" isn't nearly as useful as it seems. The reason is that there is little predictive validity for the term. High school grades plus standardized test scores might, on a good day, explain 30-40% of first year grade variance in students. What about the rest? This is high frequency data missed by assigning the category.
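Here's what "30-40% of variance explained" looks like on synthetic data; the weights are invented to land in that range, not estimated from anything real:

```python
import numpy as np

# First-year GPA driven partly by an academic index (HS GPA + test scores),
# mostly by everything else (the "high frequency" part).
rng = np.random.default_rng(0)
index = rng.normal(0, 1, 2000)
gpa = 0.6 * index + rng.normal(0, 0.8, 2000)

r = np.corrcoef(index, gpa)[0, 1]
print(f"R^2 = {r**2:.2f}")   # about 0.36 with these made-up weights
```

An R-squared of 0.36 means the category built from the index throws away roughly two-thirds of what determines the outcome.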
With this perspective, a better solution is obvious. Instead of creating special programs for provisional students, half of whom don't need it, and missing all the regular admits who do, it's much better to have a solid intervention program that can accommodate any student in trouble. After all, it is more likely to be for non-cognitive reasons that a student fails (the 60-70% of unexplained variance). A bad roommate, sour family situation, poor work habits, etc. may be more important. It's just as valuable to catch the 1800 SAT student who has problems as it is the provisional one.
The general lesson is to occasionally look under the hood of categorical descriptions to see how important that DC bias really is. It's convenient to talk in compressed symbols, but maintaining a somewhat skeptical attitude about conclusions based on this syntax is recommended. Keep in mind that the predictive validity may be quite low, and it may serve your purpose to delve into the high frequency part of the spectrum now and then.
There is a downside to this approach. Debating predictive validity is sometimes done unproductively in committees--especially large bodies like a faculty senate. There's a tendency to examine details when they're not important. A good policy may be debated to death and tabled because of a random impromptu investigation of all possible contingencies. This is paralysis through analysis. You can 'what-if' any proposal to extinction--it's an easy game to play, and a common political tactic to delay or kill one. I don't know of a good defense against it, but at least I can recognize it for what it is: noise.
* Yes, 'data' is plural, but nowadays it's interchangeable with 'information,' which has no plural, so what are we to do? "Data is" sounds better to my ears than "data are." Don't hold me to that, however. As Emerson noted, a foolish consistency is the hobgoblin of little minds. As a humorous aside on the topic, I saw an advertisement for a GPS once that boasted that the unit could hold "1024 datums".
Friday, February 06, 2009
A Useful Planning Technique
In the breathing-in phase, one gathers information and opinions without judgment. There's a particular kind of broad review of a domain of interest I've learned about from my good friend Jon Shannon. I have interpreted it here, and may have put an unexpected twist on his methods, so if it sounds dumb, it's my fault and not his.
The idea is to identify interested constituents and their goals. In practice, I ask a small group to take a couple of minutes to write down all the groups who have something to do with whatever the subject is. For the purpose of illustration, let's imagine we're wrestling with the general education program at a university, and trying to envision possibilities. Interested parties include students, faculty, administration, potential employers, etc. We combine our lists on a whiteboard by writing the names across the top to make columns. Inside the columns will go the goals or particular interests of each one. The result would look something like this:
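| Students | Faculty |
|----------|---------|
| minimal requirements | a 'great books' sequence |
| understand the relevance of the material | teach general thinking skills |
| schedule courses reasonably | schedule courses reasonably |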
This, of course, is just an abbreviated example. If you did this with a group, your list of constituents would probably include your accrediting body, the administration, and others. The goals list would be different too, depending on institutional culture. Obviously, one opportunity at this point is to get some of the constituents involved in the process and ask them what their goals are, rather than imagining them yourself. Once all this work has been done, you can find out some interesting things by looking at the matrix.
One pointer Jon gave me was to look for collisions: goals that point in opposite directions. If there aren't any of these, it may be possible to satisfy everyone's goals simultaneously. In this case, if students really want minimal requirements in a general education curriculum (so they can get on with the major, presumably), and the faculty want a 'great books' sequence or something, there's a tension there to be resolved. Often such tensions involve money. Parents want to pay less, but the administration would like more net revenue. Faculty want release time for research, but the chair needs to cover all the courses with a limited adjunct budget.
Alignment of goals is another pattern to look for. In the example given, everybody wants to be able to schedule courses reasonably. This goal often leads to practices like reserving slots for new students so that returners don't consume all the available seats in COM 101.
These considerations can lead to very interesting discussions, if one has the right group of people around the table. I had thrown this particular example together without much thought, simply to use for illustration in this article, but now that I'm looking at the matrix, something very interesting occurs to me. It's probably not a coincidence that it's related to yesterday's post on deductive vs. inductive thinking.
Consider the students' desire to understand the relevance of the material juxtaposed with the faculty's desire to teach general thinking skills. What we hope for as faculty is actually the development of the inductive/creative skills that I've written about several times. Typically, these students have been subjected to a curriculum and standardized test regimen that heavily emphasizes analytical/deductive thinking. That's likely what they think education is about. And the first thing we want out of them is to demonstrate and develop inductive/creative thinking in a general education program. See the problem? Perhaps what we should do instead is emphasize the parts of the curriculum that look most like high school--math and science, foreign language, writing correctly, how to give a speech, etc.--and ease them into the parts that demand more imagination and generation of connections. I'm thinking of literature, history, and the rest of the humanities. Ideally these courses comprise not a list of facts to memorize, nor processes to master, but sophisticated connection-building and pattern recognition.
It would also help, as I argue relentlessly, to point out to students what we're doing. We can explain what the different types of knowledge are, and why courses are scheduled in the order they are, what problems to expect, and where to get help.
Forgive my digression into a familiar topic. The matrix is perhaps a Rorschach test, in which one sees what one wants--but a useful one. With the right discussion group and good leadership, the two dimensions of constituents and goals are a great way to begin breathing in the big picture in order to do strategic planning.
Thursday, February 05, 2009
What's Wrong with Standardizing Minds
With ADS, you get minute-by-minute teaching guides, thousands of practice tests, and other materials for turning your school into a 21st-century test-preparation factory.

The whole thing is pretty cringe-worthy, because like any good parody it's not far from truth. For example, the SAT critical reading section gives the test-taker a chance to show how well she understood what she read. From the College Board's SAT test preparation web page:

There are two types of multiple-choice questions in the critical reading section:
- Sentence completion questions test your vocabulary and your understanding of sentence structure. (19 questions)
- Passage-based reading questions test your comprehension of what is stated in or implied by the passage, not your prior knowledge of the topic. (48 questions)

Here, 'critical reading' has been reduced to simple understanding of what you've read. No wonder, since actual critical thinking is such a mess to assess.
So it's easy to take shots at the current methodology of mass production testing, which I refer to sometimes as neo-phrenology, but is it so easy to put your finger on the exact problem? Obviously, we want to know whether or not the students are learning, and so we test them. If the tests and test conditions are all the same, then it's fairer than testing willy-nilly. Two short steps, and we have arrived at industrial testing. So where did we go wrong?
The problem is a nasty little enthymeme.* Instruction and testing, as I experienced them in school and as my daughter does now, focus primarily on analytic/deductive processes. That is, low-complexity thinking. Let me explain.
Suppose you take a quiz on US state capitals, and you know 49 of them. How hard is it for you to figure out the 50th from your knowledge of the 49? It's clearly impossible--there is no information in the first 49 that contributes to the knowledge of the 50th. In informational terms, the names are random. This just means you have to know it outright, because it can't be figured out. In other words, you can't get there by generalizing.
Much of education is of this sort. What date was the Pearl Harbor attack? You either know or you don't. It's not like you can go scribble in the corner for a while and come up with the answer through sheer mental effort.
Some problems are solvable from given information, of course. This is largely in the sciences, where some raw materials are given, and the solution to a problem is latent in those materials. Chemical reactions, physics modeling, math word problems, and the like all depend on more advanced forms of deductive reasoning (or analytical thinking if you like). This can become quite sophisticated, such as solving systems of linear differential equations by finding eigenvalues of matrices, and so forth. But this is still not generalizing, it's just applying more complicated methods of analysis--the rules are more dense.
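As a small illustration of that last example, here's a minimal sketch (the matrix and initial condition are arbitrary) of solving x' = Ax by eigendecomposition--entirely rule-following, however sophisticated it looks:

```python
import numpy as np

# Solve x'(t) = A x(t), x(0) = x0, via eigenvalues:
# x(t) = V diag(exp(lambda * t)) V^{-1} x0
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

lam, V = np.linalg.eig(A)    # eigenvalues -1 and -2 here
c = np.linalg.solve(V, x0)   # expansion coefficients for x(0)

def x(t):
    return (V @ (c * np.exp(lam * t))).real

print(x(1.0))  # the state after one time unit
```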
The hidden assumption inherent to standardized testing is that if students are good at analytical/deductive thinking, they will also be good at the creative/inductive thinking that leads to insightful generalization. That is, students who test well will be good at generalized problem solving. This skill is much harder than analytical reasoning, because there is only trial and error, seasoned by experience, as a guide. And yet the ability to solve problems creatively, generalizing from what one already knows, is arguably the most prized kind of thinking; this is where new knowledge comes from.
In science class, students are sometimes taught the scientific method by looking at information and forming hypotheses. Literature classes are hard because of the fuzziness inherent in generalized, creative thinking. Same for geometry, where analytical methods are taught as a conduit to creative proof construction. These are valuable experiences because they teach new modes of thought--essentially finding patterns and exploiting them.
It's hard to use a fill-in-the-bubble test to look for generalized thinking because the answers are right in front of you. The whole point of a creative endeavor is that the answer isn't given, and may not even exist! Even free-form elements like a short essay are subjected to analytical measures when scored in a standardized context. How do you teach a computer to measure creativity, after all?
By placing so much importance on standardization, we emphasize rote problem solving and step-by-step methods--the hallmarks of deductive reasoning. This is not unimportant--one does need to know one's times tables--but it's setting the bar too low. Conveniently, analytical skills are far easier to assess and teach. Unfortunately, this misses the larger point. We need thinkers who can do the following:
- Look at what's given and recognize connections because of analytical training
- Form ideas about patterns that may exist and test these guesses against what is known
- Generate paths of inquiry to find new information that solidifies or overturns existing knowledge
*I think I may have gotten that line from Giles Goat-Boy (another parody of higher ed) by John Barth. Highly recommended.
Wednesday, February 04, 2009
Alternatives to Lecturing
The lecture system was crazy for teaching organic chemistry. What are professors doing in a lecture? They’re outlining and explaining the important points (and wasting time mentioning even obvious points) of the text on the blackboard. But why? Gutenberg invented movable type. That made printed textbooks available 500 years ago — even now in chemistry rather than alchemy! Students don’t read them? Of course not, if the whole course is dependent on what the prof puts on a blackboard! Students can’t pick out the most important ideas and facts from a 500-page text (in 1948, or thousand-page now) by themselves. They’re beginners.

This reminded me of something Bertrand Russell said in his autobiography, which my wife gave me for Christmas a couple of years ago. I can't find the exact quote, but he said that at university he never learned anything from the dons, and resolved that if he became one he would not expect his students to learn anything from him either. I think he was referring more to content than style, but both are clearly important.
The resignation to students not reading the text is one I've experienced in my own career over the last couple of decades of teaching college math. A textbook literacy project, perhaps run by the school library, might be a way to approach this. Or a general reading campaign (here's an amusing take on that: faceabook).
The solution to the lecture/textbook problem is summarized by Dr. Lambert thus:
[W]hy not give them something a bit better than the [class] notes on the day or the week before the class, not really an outline of the text but more of a guide to what’s important and what’s not in each day’s text assignment. Then the students could read a day’s assignment and know what to look out for as the key points, realizing that the professor is not going to outline it on the board. Instead, she or he will explain in detail a few complex things in the assigned pages, answer any questions about them, and show how to conquer problems like those in the text, always open to questions and for back and forth with students.

This is an argument for a more engaged style like that of the vodcasting approach. The two are quite similar, in fact. The main difference is that a static outline has been replaced by video. The advantages of the latter, to me, are that actually hearing words is better than reading them (for evolutionary reasons), and that the animation possibilities inherent to the video medium are superior to plain text. Combined, these give the vodcasting approach significant advantages for delivering information. For indexing material and outlining the important bits, I can see where a static outline would still be a great thing to have.
How might you try this out? This is a question I'm tossing around. There are the funding and nuts-and-bolts questions: how to actually record and distribute the material, train the professors, and so forth. There is also the need for a local champion to take on the project. Finally, one would like to assess the results, especially given all the time and expense involved. For the last part, Dr. Richard Hake [blog] has long advocated using pre/post tests in the sciences as a way to demonstrate accomplishment, and the research seems to support this position.
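If I remember the literature correctly, the usual summary statistic there is Hake's normalized gain: the fraction of the possible improvement a class actually achieves. A two-line sketch of the computation:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain <g> for class-average pre/post scores
    given in percent: the improvement achieved divided by the
    improvement that was possible."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

print(normalized_gain(40.0, 70.0))  # 0.5: half the possible gain realized
```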
I spent some time Googling "alternatives to lecturing" and sifting through the results. Much of it is fairly obvious: use discussion, debate, Q&A, problem-solving, and so forth. More interesting is the idea to use simulations in class. This probably works best for technical fields, but has some advantages. In my experience, simulations can:
- Teach deep connections with directed 'play'
- Teach software tools used in the profession
- Teach secondary skills like programming
- Link to coursework in an obviously applied setting
Perhaps the real problem with lectures is that they don't engage the learning part of our brain. How do we learn? By trying things and making mistakes until we get it right, I would say. Simulations and similar types of software can provide that.
In another years-long project, David Kammler at SIU-C and I developed a software package for Fourier analysis, which can be used to 'play' with the ideas. Here's an example I used in a grant application:
Load a vector traced from an image. This is a complex list of values (meaning real and imaginary parts) plotted on the complex plane in the usual way.
Next we use a technique that would be learned in the course. We want to compress the information in the vector (the drawing) by looking at its frequency components and removing most of them.
The top part of the graph is the real part, and the bottom is the imaginary. The squiggly lines show the magnitudes of various frequency components. Most of the information is in the low frequencies, so I have zeroed out the higher frequency data to compress it. This is called a low-pass filter in engineering. Now we imagine sending the compressed vector of frequency data to our friend, who isn't fazed by the squiggles. She's had Fourier analysis, and knows that she should unscramble them by using the inverse Fourier Transform. Because we removed some of the information, the reconstruction won't be perfect. Here it is.
This all takes no more than 10 seconds to do, so the try/response cycle is very quick. The effect of the trial is obvious. One can easily go back and try different filter widths to see how much the quality of the final image improves. It's all very fast. In my mind I equate that with the speed of learning; the faster we make mistakes, the better.
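For anyone who wants to replicate the experiment without our package, here's a minimal numpy sketch of the same cycle (the traced 'image' is a made-up closed curve, and the cutoff is arbitrary):

```python
import numpy as np

# Stand-in for a vector traced from an image: N complex points
# outlining a five-petaled, flower-shaped closed curve.
N = 512
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
z = (1 + 0.3 * np.cos(5 * t)) * np.exp(1j * t)

Z = np.fft.fft(z)  # the frequency components ("the squiggly lines")

# Low-pass filter: keep the k lowest positive and negative
# frequencies, zero out the rest -- that's the compression step.
k = 4
mask = np.zeros(N, dtype=bool)
mask[:k] = True         # DC and low positive frequencies
mask[-(k - 1):] = True  # matching low negative frequencies
Z_compressed = np.where(mask, Z, 0)

# Our friend reconstructs with the inverse transform. Because we
# threw away the petals' frequencies, the flower comes back as
# nearly a circle -- a lossy but recognizable reconstruction.
z_approx = np.fft.ifft(Z_compressed)
print(np.max(np.abs(z - z_approx)))  # reconstruction error, about 0.3
```

Changing k and rerunning is exactly the quick try/response cycle described above.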
An ideal program might be outlined like this:
- Facilitate the creation and use of vodcasts with a trial group of instructors, providing technology and support, probably through the library in combination with faculty development leadership
- Help the faculty member develop active classroom strategies to supplement the vodcasts
- Outline and index vodcasts, and put the technology in place to deliver them over the web
- Provide a textbook literacy program at the library and encourage use by the target group of students
- Mate the program to a software package that can do simulations quickly and easily
- Assess with pre/post tests on content