Tuesday, September 29, 2009

The Future of Self

Self-publishing, that is. The publishing is implied.

The printed word used to be a very primitive organism (photo courtesy of NASA).


Any phenotype tended to mutate quickly due to copying mistakes, assuming it could reproduce at all. The best a text could hope for was to hang around a monastery, waiting to hook up with a monk for an act of reproduction that was painful for both parties. It was fragile and always on the brink of extinction.

Technological revolutions changed things: reproduction became easier and fidelity nearly perfect. But still there were economic constraints to thoughts becoming printed words. So the smart money generally kept the gate, watching over the memes in their printed bodies. The word for passing the threshold was "pubplyschyd." It just means to make public.

For a while longer, people will still think of paper and binding when you say "book." Paper books are like the cassette tapes of a bygone era of music distribution. They're expensive, bulky, and have to be hauled around the country in trucks. If you want to sell them, you have to rent a big building and buy a lot of shelves and put in a coffee shop. We make a lot of them. According to Bowker, in 2008 the US produced 275,000 new titles and editions.

But the way things are going (digital, instant, free), books and shorter texts will tend to be published as ethereal bits and bytes. The Internet has made publishing so easy your grandma can do it. According to blog central Technorati.com, there are over 200 blog posts a week tagged "grand kids." True, I don't know that they were all created by mothers' and fathers' mothers, but it's a good guess some are.

By contrast, the number of blog posts tracked by Technorati with tags "music" or "tv" or "book" has hovered around 11,000 per week in the last few months. See the graph below.

You can click on the graph to go create your own version.

Technorati also reports that there are about 900,000 posts every day on blogs that they track. Just for fun, assume that a thousand blog posts equals the textual mass of an average book. That's about 329,000 blog-books per year (not all in the US). From the Technorati state of the blog-o-sphere page:

The blog has forever changed the way publishing works--now anyone can be a publisher. The issue is no longer distribution; rather, it's relevance. --Brad Feld, Managing Director, Foundry Group

Longer works, too, are easing past the traditional barriers to publishing. From Bowker, which calls itself "the world's leading provider of bibliographic information management solutions...":
Our statistics for 2008 benchmark an historic development in the U.S. book publishing industry as we crossed a point last year in which On Demand and short-run books exceeded the number of traditional books entering the marketplace.
You can find the statistics here. For 2006-2008 (projected), the percentage of on-demand, short run, and other unclassifieds went from 7% of the total to 30% to 51%. It's not given how many of these are from self-publishers, but with the explosive success of Internet sites like Lulu.com, it seems reasonable that this is where the growth is from: the release of pent-up demand for publication by writers frustrated by the traditional barriers. In the new ecology of literature, reproduction is perfect and as easy as the click of a mouse. But it's only free online. Paper books still require printing and distribution. It seems inevitable that the future of publishing is mostly self-publishing, and that it's mostly distributed in electronic form. 329,000,000 blog posts a year represents something of a trend, I propose.

Imagine for a moment where all of this is probably headed. Picture in your mind that virtually all printed works become electronic and freely available almost as soon as they are published, as music effectively is nowadays. How could an author become successful in this brave new whirl? Where's the payoff? Musical artists can at least earn money from live performances. It doesn't seem likely that authors can demand the same kind of treatment. Successful blogs can make money from advertisements, but it's hard to imagine that a reading public would put up with ads placed in the margins of their mystery novels. Someone could easily strip them out and distribute the plain copy.

This may be a catastrophe for publishers, who will fight it with digital rights management "solutions" and such. Until the Internet, publishing was about publishers--they set the standards and reaped the profits. If you poke around writers' forums, the first thing you see is "don't try to make a living doing this." Nowhere is the pernicious effect of this bottleneck control more evident than in academic publishing. This too is eroding with pre-print services like arxiv.org, which have minimal gate-keeping and instant electronic publishing. For academics, I think there will be the dawning realization that good research could be directly building their own brands rather than those of gatekeeper journals, which take freely donated product, slap their copyright on it, and mark it up so libraries have to pay through the nose for it.

For the first time in history, publishing is being controlled by writers directly. This is a good thing. The reasons to self-publish are as many as there are selves. From personal, limited distribution "works" like family photos and history, to advertisements on craigslist, to micro-publishing like Twitter and text messaging from phones (there are hundreds of billions of SMS texts sent in North America in a year), all the way up to those authors who dream of becoming the next J. K. Rowling. The self has been unleashed.

But suppose that this unknotting of the nous is not enough for you. What if you really do want to make a living at writing novels or non-fiction works? It may be that the answer is that you can't. In this literary apocalypse, the masses demand free content, change all the laws to make it official, and a tragedy of the commons results: commercial long-form literature survives only in the medium of flickering screens in the intervals between advertisements for fizzy drinks.

I'm not so pessimistic, however. We can list some possible strategies for the self-publisher. All of them require a personalized presence on the web somewhere you can call home. The only uniqueness left on the web is the domain name--you really can own that. That and your personal identity have to be the focus; everything else can be copied. Once you get traffic to your electronic address you begin to have options like advertisement and product endorsement, reputation for expertise in an area, invitations to participate in other projects, and so on. Authority, like fame or money, grows exponentially when well tended.

Possible strategies include:
  • Serialization. This is an old trick to milk content for what it's worth. Chop up your content into pieces and dole it out to the masses. Make a podcast while you're at it, so they can listen to your ideas while commuting.

  • Added-value. A place for readers to comment about your work. A blog to keep fresh events at the top of the page. Give readers a reason to keep coming back. Host fan-fiction. Index everything useful that has to do with your topic. Make your site the place to go for anything to do with X, whatever X is. The possibilities are endless here, from Internet game spin-offs to greeting cards.

  • Go on the road. If you have the speaking skills, go sell your personality along with your intellectual product. Richard Dawkins isn't famous because he sold some books--it's the ideas in the books and the way he presents them that draws crowds when he speaks.

  • Hybrid approaches. Add value to your product online, but keep back major works to sell in the form of physical on-demand books. I see this as an intermediate strategy as the probable transition to electronic delivery evolves.
You don't have time for all this. There's no way you can do all of this--or even a small part of it--and still continue to write. That's why the surviving publishers will morph into companies that provide services to authors (instead of the other way around). Web maintenance, editing, advertising, and all manner of creative value-added options will exist on a buffet to entice talented authors to share their glory (or at least profits) with publishers. The music industry is going through something like this now. Textbook publishers have discovered the trick too, with quite good online supplementary materials--restricted of course to those who buy the text. As the price of the text tends to zero, the add-ons will become the main business.

In case this vision leaves the budding superstar author in despair, I can offer one more strategy: the get rich quick scheme for writing in the 21st century. Like all such plans, it's a nice dream, but in practice you might have a better chance with the lotto.
Get rich quick: Write your novel, get it edited, and release it electronically on a cheapo website branded to the book. When it becomes a major success, have the screenplay ready. Sell it for millions to a major company and hang on to merchandising rights. You might want to think about how your characters would look on the side of a Biggee drink at Burger House when you describe them in your novel.
The author died in the 20th century, but he's back with a vengeance. Maybe this explains the resurgence of vampire and zombie stories in pop culture. The whole point of self-publishing is the focus on self. The self is the motivating factor for creating the work, but it's also half of the relationship between reader and writer. The responsibility of the scribe is to find relevance without being told what it is by a publisher. When an anonymous guitar player created a sensation on YouTube--the most downloaded video yet--people wanted to know who it was. The New York Times wrote a piece about tracking him down. Finding relevance may not be easy, but it will get noticed when it happens.

The payoff for the author is potentially geometric growth. If you make a great product year after year and become known for it, you'll accumulate pointers in the form of hyperlinks and critical opinion. Tim Ferriss has this down to a science. For most of us, this daily grinding out of product, attention to the business of getting it in front of people, and the wait for the eventual payoff, is too much. It is now and will always be a particular breed who can make a living solely by writing in long form.

With the barriers and capriciousness of the middleman going or gone, great heaping piles of junk no one will ever read are being produced along with the nuggets of good stuff. But that's okay because it's all indexed, tagged, and filtered through formal and informal social networks that feed us the good stuff. Yesterday I glanced at my customized feed on reddit.com and discovered a new graphic novel about Bertrand Russell and the history of logic. Now that's a niche product, and it found me effortlessly. The problem is not that there's so much junk out there to filter through; the problem is that there's too much good stuff to ever read in a lifetime.

The future of self will be more and more closely related to self-publishing. I've pushed this idea (here) as a framework for creating student portfolios early on, to provide training and infrastructure for creating a rich resume as their professional life unfolds. Employers won't just look at Facebook before hiring, they'll want to see evidence of a rich production of professional output and online reputation in the field. Academics already take this for granted, and many create websites with links to published papers, blogs, and other added value. Self-publishing becomes a means to establish an identity and reveal it to the world. Sooner or later, we'll drop the "self" part as implied, and go back to just publishing. Or perhaps, as I suggested at the beginning of the article, it will be the other way around: the self will imply published.

Note: this post was entered in a contest at Backword Books to write about self-publishing. Nicely self-referential, no?

Update: After work I read a bit more of Logicomix and discovered that Principia Mathematica was self-published.

Monday, September 28, 2009

Predicting Success

Nature abhors a vacuum, it's said. My daughter showed me the other day how her science class used this "principle" to determine the amount of oxygen in the air by using a candle and a test tube in water to measure before and after. Of course, it isn't really abhorrence but air pressure that makes a vacuum a chore to create down here where we live. But maybe economics really does abhor a passed-over opportunity. As in the old joke where one economist says "hey, there's a hundred dollar bill lying there on the ground," to which the other replies "can't be so--someone would have picked it up."

I've argued for a while that the low predictive validity of GPA + SAT creates market opportunities for those willing to experiment with other demonstrations of achievement. Here's a list of previous posts related to the topic:
In Outliers, Malcolm Gladwell makes the following observation about the international math and science test called the TIMSS. He notes that the test is accompanied by a 120-question survey, which many students don't complete. In his words:
Now, here's the interesting part. As it turns out, the average number of items answered on that questionnaire varies from country to country. It is possible, in fact, to rank all the participating countries according to how many items their students answer on the questionnaire. Now, what do you think happens if you compare the questionnaire rankings with the math rankings on the TIMSS? They are exactly the same. (pg. 247)
He concludes a page later that "We should be able to predict which countries are best at math simply by looking at which national cultures place the highest emphasis on effort and hard work."

One could certainly ask for better analysis--why not correlate the number of survey items completed against the math score by student, rather than aggregating by country? But the sentiment is the same as that expressed in the noncognitive literature: there's more to success than the ability to do mental gymnastics.

In InsideHigherEd today there was an article "Next Stages in Testing Debate" talking about institutions that de-emphasize SAT in admissions decisions:
[A] common idea was that decreasing reliance on the SAT does not mean any loss of academic rigor and can in fact lead to the creation of classes that do better academically (and are more diverse).
This may require a rethink and additional training of admissions staff:
That report said that for too many admissions officers, the only training they receive on the use of testing may come from the technical training provided by testing companies, entities that have a vested interest in the continued use of testing.
Some of the other types of accomplishments sought by admissions officers are evidence of creativity, practical skills, wisdom about how to promote the common good (Tufts), and essays (George Mason). This idea is something that seems to be blooming. As Mr. Gladwell might say, it's blinking toward an outlying tipping point.

Evidence of this arrived in my in-box the other day: a forwarded email from the Law School Admission Council (LSAC) with the following news:
LSAC has funded research on noncognitive skills for some time. A study funded by LSAC—Identification, Development, and Validation of Predictors for Successful Lawyering by Marjorie Shultz and Sheldon Zedeck—identified 26 noncognitive factors that make for successful lawyering. The study included suggestions for assessments that might measure those factors prior to admission to law school.
I hadn't heard of Shultz and Zedeck, so I scurried off to the tubes that comprise the Internets to find out more. You can find the whole 100-page report and more here. The executive summary has an imposing, lawyerly warning on the front page:
NOT TO BE USED FOR COMMERCIAL PURPOSES NOT TO BE DISTRIBUTED, COPIED, OR QUOTED WITHOUT PERMISSION OF AUTHORS
I suppose this implies an argument that fair use somehow doesn't apply to this work. In any event, since I've obviously already violated the terms with the quote above, I may as well proceed.

Rather than focus on something easy like just predicting law school GPA, the researchers actually assessed job performance and obtained LSAT and law school performance data. Then they threw a battery of tests at the problem: the Hogan Personality Inventory, the Hogan Development Survey, the Motives, Values, Preferences Inventory, a Self-Monitoring Scale, a Situational Judgment Test, and a Biographical Information Inventory. This on a sample of more than 1,100 subjects. I think the word I'm looking for is "wow."

The results identified indicators of effectiveness (per their definition) that seemed to assess independent characteristics, adding dimensionality to the standard predictors. In fact, the LSAT didn't seem to predict success at all. Go look at the executive summary for details--it's not very long.

All in all, this seems to be a solid study that shows that noncognitives are important--perhaps better--predictors of professional effectiveness than the traditional cognitive ones.

Other noncog stories in the news:
The College Board is even getting into the act. A 2004 publication talks about "individualized review" that includes factors beyond GPA and test. I am led to understand that they have a big project underway now on noncogs, but I can't find the website.

Okay, remember the vacuum? What if we managed to identify these noncognitive variables and start to use them? The LSAC folks outline what can happen next:
A major concern about developing an assessment for noncognitive factors is the possibility that the test would be so coachable that its results would be unreliable in the high-stakes environment of law school admissions.
I argued here (Zog's lemma), here, and here that any imperfect predictor invites error inflation for economic gain, but it doesn't take a genius to see that if checking "I'm a hard worker" gets me more financial aid, I'll be more inclined to overestimate my industriousness. This is a game theory problem with no solution that's likely to be mass marketed. Imagine if the assessment of prospective students had to be done laboriously by hand by highly trained admissions staff, instead of relying on a convenient test that cranks out a one-dimensional predictor. I'm not sure that's a bad thing.

Saturday, September 26, 2009

Is General Education Worth It?

In order to be an academic professional, it is almost required to have an opinion about the perfect general education curriculum. This could be prompted by an actual review of the institution's curriculum or just lunch conversation. I remember a fiery argument about the foreign language requirement taking place between stabs at a taco salad. But in the midst of the debate about the trivium and quadrivium and great books and (occasionally) transferability you seldom hear the call to chuck the whole idea. True, some in the professional colleges may be thinking it, but they can hardly be vocal about it at a liberal arts institution because they're not the "experts."

What exactly gen ed is varies greatly from place to place. Take a look at WhatWillTheyLearn.org for a school-by-school comparison with "grades" for meeting certain (rather ideological) criteria. It's interesting that the top liberal arts schools tend to have very loose requirements--electives, almost.

The question "is it worth it?" has to be asked. In round numbers, the gen ed curriculum comprises a third of a bachelor's degree. At today's prices, that can easily amount to a $24,000 investment on the part of students, parents, and subsidizers*.

Note that I'm not talking about liberal arts majors, which have clear economic value, judging from a Wall Street Journal report. Below I've sorted the data by the percent increase in salary from start to mid-career, as an indication of the personal growth potential of the training--life-long learning, if you will. The top two are liberal arts degrees, and history, art history, and English are all above 70%--my artificial cut-off.

The question isn't whether English or math or philosophy is worth studying, but whether the broad and necessarily shallow treatment of a gen ed curriculum does any good. Despite the mighty weight of tradition, shouldn't our null hypothesis be that an expenditure the size of a new Camry is not automatically a worthwhile investment? Let me put it another way. Suppose we were now to try to make the case that the gen ed curriculum isn't long enough--that we should double it to around 80 credit hours. There would surely be a high standard of proof asked of us.

The AAC&U calls itself "the leading national association committed to advancing and improving liberal education for all students." In practice, they have a lot of publications and conferences on the topic of liberal arts requirements, including the LEAP initiative--a "panel of experts" approach to guide choices in constructing general education. One motivating factor was what employers said they wanted. You can find the list here. The top two are "science and technology" and "teamwork skills in diverse groups," followed by thinking and communications skills (including "applied knowledge") and intercultural knowledge.

Focusing on science and technology, the Pew Research Center published a survey here that summarizes results of a basic science quiz. It looks like high-school material to me (e.g. are electrons smaller than atoms?), but college grads did significantly better: 57% got at least 10 of the 12 questions correct, compared to 17% for high school or less. But is this cause and effect? Or is it simply that kids who go to college are a lot more likely to know the answers to basic science questions?

Thomas R. Cech makes the argument in "Science at Liberal Arts Colleges: A Better Education?" that liberal arts colleges do a great job of preparing students for a career in science and engineering. He gives statistics on PhDs per 100 graduates, for example. But again this is a focus on majors, not on the liberal arts as embodied in a general education curriculum.

Search for worth. I poked around on the AAC&U website looking for "worth of general education," and found only this. It's clearly a transcript or speaking notes for a session that should be on this page, but I can't find it to attribute properly; there's no author listed. The speaker gives two compatible reasons for gen ed to exist:
General education can be “foundational,” intended to build a beginning
knowledge base and competence, often in a few selected disciplines, but
potentially it could include a variety of intellectual skills and personal
responsibilities.

A general education program could have an “interdisciplinary or integrative
purpose.” Such a program would seek to foster connections among knowledge
areas and other kinds of learning, and help students build understandings and
skills as they learn to employ multiple perspectives to address issues and solve
problems both in and out of the classroom.
There are plenty of other possible justifications, including the "broad knowledge" or cocktail party argument--what if our graduates are at a swank party and don't know who Plato was? The horror. More and more, you see civic engagement on the lists too.

The excerpted talk leads to an exercise of goal development and then assessment discussions, but no resolution to the question of value of the whole endeavor. I suppose that the assumption is that once you have a stated purpose, it's enough. I think it's fair to say that assessments are nowhere near trustworthy enough to tell us if these purposes are being accomplished.

The question of worth may be being answered another way: by liberal arts colleges that change their mission. David W. Breneman wrote in 1990 in "Are We Losing our Liberal Arts Colleges?" that:
My conclusion [...] is that the liberal arts college is in much greater peril than I thought it was, but not because it is failing financially and closing its doors.

Instead, it is surviving, but only by changing and becoming something else--for want of a better term, a small professional college.
An updated discussion of this phenomenon can be found in "The Case of the Disappearing Liberal Arts College" at InsideHigherEd.com. Are market forces deprecating the traditional role of distribution requirements along with the lofty goals that traditional liberal arts colleges advertise?

The weightiest charge I think we can levy at gen ed is that it's broad but too shallow. We can argue that students aren't likely to retain much from a course or two on history or philosophy or chemistry--that after three or four years, the difference we have made is minimal. There is the romantic notion that a student who thought she wanted to be an engineer will get turned on to poetry because of a required distribution class, and change her whole life. But balancing those just-so tales are the two years and tens of thousands of dollars on the other side of the equation, which get paid whether or not a student has an epiphany.

In thinking back to my own freshman year, I guess I can sum up my liberal arts exposure (from SIU--not a liberal arts school) as: some courses were about as good as reading a decent book on the subject. Some were bad, and at least one turned out to be useful: my German class, because I married a German. All in all, I don't think the gen ed requirements did much except introduce me to college. The major program, on the other hand, set the course for the rest of my life.

If we were so bold as to assume for the sake of argument that the gen ed requirements are ultimately not worth the cost in time and money, then we could toss another third (electives) of the curriculum too, and reduce the cost of a bachelor's by nearly two-thirds.

A hue and outcry from the academy against such speculation would include the charge that this reduces a program to merely vocational. To be sure, there are other arguments, such as the need to broaden the mind beyond a narrow subject, create life-long learners, grow good thinkers and communicators, and so on. Let us be generous, then, and assume that we only cut the curriculum in half. While we're fantasizing, we may as well dress up the program in the styles that are in vogue. Cast the searchlight of your imagination upon this scenario:
We'll call it Minimum U. It's purely online, and offers programs in a tightly defined group of specialties that have solid career paths. Let's say math and computer science, to be definite. You could use engineering or business instead. Before you scream "vocational," I should say that you could use philosophy as the subject too. It's certainly valuable, as demonstrated by the figures in the table above--you just have to figure out how to market it so you get students to enroll.

Min U has no gen ed requirements in the classical sense--just prerequisites that often appear in the "core" of such requirements, like basic math and communications skills. Students work at their own pace in learning communities toward milestones. Along the way they create portfolios of work that they own. Most students have jobs and take classes from home. The university fosters internships and connections between students and the workplace in the context of the discipline.

Students grow their way through a structured curriculum in their chosen area of study. The model of achievement in the big picture is:
  1. Knowing underlying facts
  2. Application of theory
  3. Creation of new theory
There's no reason that thinking and communications skills couldn't be taught and assessed within this framework. Teamwork too, and perhaps you could shoehorn in global studies, depending on the degree. I think it's easier to do this during the dedicated development of a specialized knowledge than it is in a survey course.

Students could graduate in as little as two years, but more commonly because of work schedules would take four. When they're finished they have a solid understanding of both the theoretical and practical knowledge in their field. They have a portfolio and a resume and connections within the field.

Let me close with the following thought: the attitude that a student brings to coursework matters a great deal. I would advance the notion that students are more engaged with a subject they have chosen to study than they are with stuff foisted upon them "for their own good." By harnessing that idea, one could remake a four-year degree into something that produces a more highly trained specialist, but without giving up the best parts of what general education is supposed to deliver.


*Figuring around $28K tuition at a mid-tier liberal arts school, less a 34% discount = $18K per year, times one third of the curriculum, times four years = $24K

Wednesday, September 23, 2009

Evolutionary Thinking

Any reader of my blog knows how I advertise the evolutionary approach to solving hard problems, but what does that mean? I have a prime example to trot out, something I came across on Reddit yesterday. The article is "The story of the Gömböc," which may not sound promising, but trust me. I dare not spoil this tale of discovery by condensing it into some trivial didactic--it's worth reading in its entirety. Commentary after the break.



This story has potential for all kinds of reflection on the practice of discovery. One could mention the noncognitives--trying every darned thing one could think of without giving up, for example, or the willingness to be wrong and learn from it (accurate self-reflection). Or we could talk about the concept of critical thinking as iterating loops of analysis (testing a theory or evidence) versus creativity (finding theory or evidence). It's a marvelous example of critical thinking.

But I'd rather muse about what practical lessons can be drawn about the evolutionary approach. How can you actually do it? Suppose you have a tough problem at hand. How to assess higher order thinking skills in a discipline or how to get better evaluations of teaching effectiveness, or how to redesign general education? What evolutionary principles can potentially help find a solution? I have a few to suggest, and I'm sure you can think of your own (drop a comment if you do!).

First, maybe it's already been solved. If the problem has been around for a while, it's a good bet someone else has thought of it. I'm working on a research problem using a computational model for living things, and ran into a communications problem that looks like the sort of thing NASA would have to deal with. So I emailed yesterday to see if someone has already solved it. The internet is obviously good for that sort of thing. I don't know how many times I've wished I had some simple software tool, and two minutes later discovered that someone had created it and released it into public domain.

Well, if it hasn't been solved satisfactorily, what next? There is no magic evolutionary wand that makes the work go away. Think of picking over 10,000 stones on the beach looking for the right one. But the process can be described very easily.

First, you have to know what you want because you have to be able to identify it when you find it. That is, you don't need to know the form of the solution to your problem (otherwise you already have the answer), but you do need to be able to recognize it as a solution. You may not know off the top of your head what the prime factors of 1,010,299 are, but you can easily verify them once you're told (911 x 1109). This is the analytical part. In natural selection this is done by survival and reproduction. Critters that don't pass their genes on aren't solutions to the problem. For something fuzzy like identifying a good method of evaluating teaching, this is going to be hard to come to grips with. But setting out looking for a solution without knowing how to tell if you find one is a waste of time.

So you could, for example, decide that faculty confidence is the most important thing for teaching evaluations. Then the challenge would be to come up with some way to assess that--probably involving asking them what they thought of this or that method. If this sounds political, it's because it's a problem laced with politics.

With a tool for evolutionary fitness in hand, now you have to find a way to generate potential solutions. This is the creative part. My recent post on brainstorming quotes research saying this step is best not done in groups. But a group is good at weeding through the ideas. In the scenario I described (evaluating teaching), if faculty confidence is the key then having a test group to try out ideas against would be useful--to weed out ones they don't like. I think it's obvious that this weeding group should be different from the idea-production group; people are often too attached to their own ideas to be objective. So two groups: a creative one for idea production. They may or may not have to meet as a group. Then an analytical group for weeding out non-solutions. They have to be very clear as to what a solution looks like. Add a bit of salt and iterate and you have an evolutionary process.

In a nutshell, it sounds very simple. Generate lots of ideas and be ruthless about weeding them out according to your clearly-defined measure of success. Putting that into practice is complicated, however, and some level of formalization could help. Good leadership is a must.

Think about all the committee meetings you've sat in that wrestled with tough problems like these, but without a plan or clear notion of success, and almost certainly without a wall between idea production and idea deletion. How much time was wasted?

No system is perfect, and there are game-theory short circuits to the method I've described too. Without good leadership, the group responsible for judging success (killing bad ideas, keeping good ones) can have too great an influence on guiding where the investigation leads. The conversation can go like this (C = creative group, A = analytic (weeding) group):
C: Here's our ideas so far.
A: Nope. They all stink.
C: (Frustrated) Then what DO you want?
A: Well if you come back with this or that, we might be interested.
This becomes the equivalent of "teaching to the test," and subverts the process. On the other hand, constructive feedback is good:
C: What did you think of idea X?
A: It comes close--but the committee saw Y as a problem. Is it possible to create some variations?
The difference is subtle.

The evolutionary process for decision-making I've described above is theoretical and idealized. In practice it will always involve compromises. Deadlines will pressure groups into accepting incomplete solutions. There may not BE a solution, or it may be impractical. So treat this as a tool in your tool bag and not a panacea for all big problems. After all, natural selection got some things wrong too--I don't have the topological properties needed to stay upright when I fall asleep in long useless meetings, a useful trait indeed (gives me an idea for a comic strip).

Friday, September 18, 2009

Complexity and Reboots

One of two stacks of books on my nightstand. The thin volume on top is on simplicity. Under it is a thick textbook on complexity theory.

Complexity is one of those really profound ideas that is surprisingly simple to access. The basic idea is that the complexity of some set of data is the size of the smallest description. So a one with a million zeros is not very complex because you can write it as 10^1000000. Or 10^(10^6), which shortens it by one character. This idea wouldn't work if all methods of description weren't somehow interchangeable. We're really talking about computer-code-like descriptions, and the argument goes like this: Any computer language that has certain basic functionality can be used to create an interpreter for any other kind of language. So we can switch between one and the other by paying the cost of this translation. It is in this sense that descriptions are language-invariant, and so there is something like an absolute measure of complexity in this formal sense.

It's a powerful idea, even in mundane affairs. I'm a simplicity hawk when it comes to creating new bureaucracy, for example, or creating a web interface. Most things end up with more whistles and bells than are good for them. And the more complex a system is, the more unexpected its behavior can be. This is probably why evolution hasn't invested a lot of "effort" in making organisms live forever.

Ever wonder about that? Why do our bodies suffer senescence? Why didn't we evolve with robust repair mechanisms so that we could go on reproducing ad infinitum? Think of all the effort it requires biologically to start all over again with a baby and create a new specimen capable of reproduction. There must be some good reason.

It's like a reboot. Running computers, unless they're running perfectly stable software, tend to accumulate problems in the machine's state. As I'm sure you've experienced, this can cause the thing to blue-screen and freeze. Rebooting eliminates the accumulated complexity. Same thing with biological reproduction, I assume. I imagine there's a limiting factor that works like this: any self-repair mechanism incurs a complexity cost. That is, it's easier to build a system that can't self-repair than one that can. So you have to add complexity in order to reduce it. At some point, it's self-defeating to try to add more repair ability because it isn't capable of recovering its own cost. I have no idea if this is true, but it's an amusing theory.

What does this have to do with higher education? More generally, it has a lot to do with self-governance. Faculty senates, state and federal legislative bodies, and so forth are run by systems of rules, at least in theory. Over time these accumulate complexity as exceptions are created and new conditions included: think of the tax code. I would hypothesize that it would be a good thing to build a reboot procedure into such things.

Consider a faculty senate, which in a moment of singular passion, changes its bylaws so that all future motions can only be carried with every voting member present and voting for the motion. This is tantamount to self-destruction. On the other hand, the rules and committees could proliferate to such a degree that processes slowed to a crawl. I almost wrote "glacial speeds" but the ice mountains are moving faster these days than most committees I observe.

Tuesday, September 15, 2009

Survival Strategies

Speculation is rife that the brick and mortarboard, chalk and talk higher education establishment has reached senescence. The particular charge is that Edusaurus rex has grown thick-limbed on the richly oxygenated air of heavy government subsidies and cannot survive the change in economic conditions, which now favor the fleet-footed Cyberdon meme, which potentially can deliver the same product much more cheaply and conveniently. Let me elaborate on this cataclysm with a (cue crashing thunder)...

Worst Case Scenario. Imagine an unholy union, birthed on a Friday the thirteenth midnight in a graveyard, between University of Phoenix and Wal-Mart. Because continued exposure to "that which cannot be named" may have unpleasant physiological effects, let us call it Yoyo U to disguise this chimera in polite conversation*, or more formally as Cyberdon yoyo.

The trough of state and federal money for higher education is deep and broad and as untouchable as a third rail. The bricks and mortarboard institutions have gobbled it up, burped, and said "more please!" before even using a napkin. To get into the feeding pen and belly-up to the rich public broth, one must first pass through the baleful gaze of the accreditors and weather their harsh words and blows from compliance prods. Imagine though that Cyberdon yoyo passes this gate, slips in unannounced, and proceeds to feed. And reproduce.

As a rule of thumb, provided that you can talk your students into taking out loans, you can count on about $10K per student per year without them having to dip much into their own pockets. In fact, the way the loans are set up, they can sometimes take out extra money and buy a bass boat too. I've actually heard "College Eks is paying me to go to school." But yoyo is bred for value, and manages to wholesale price its educational product at subsidy level: any student--particularly PELL-eligible students with high enough GPAs to qualify for some state-based merit aid--can attend nearly for free. To be sure, this requires a very clever financial aid digestive system on the part of the Cyberdon, but this is a worst-case scenario, remember.

Of the 170 or so small private liberal arts institutions, I suspect many depend on bread and butter students like the ones described: some need, good credentials. More generally, any institution that depends on state and federal aid would see a fierce new competitor gobbling up part of their PELL and potato stew as true discount education comes to the masses.

At first, Edusaurus laughs at the scrawny little unattractive furry beast. It makes jokes about the quality of the product, and mocks the labor standards. But by and by, the weather turns chilly, and there are suddenly a LOT of little yoyos running around, gobbling up all the best parts of the stew.

Eventually, because of sheer scale, Cyberdon yoyo can offer services that Edusaurus can't dream of. Free textbooks for class, and high quality ones too. A massive online library. A return policy that can't be beat. Gift cards for a PhD. True, the selection may not be as great as one would like: there are no Masters of Applied Ambiguities degrees or Historiography of Post-Colonial Xenobiology courses, but you can find what you need, and it's cheap. And you can get a degree while working full time! Heck, all you need is a smart phone--you can take classes while driving down the interstate!

What happens to the small colleges that then have their customers staying away in droves? I think it's safe to say that if this scenario plays out, many venerable institutions will face unplanned obsolescence. The ones that survive this K-T boundary, like the alligator, need to find an undisturbed ecological niche.

There are still bookstores. Amazon.com and bn.com didn't kill the traditional bookstore. You'll notice that you pay a lot more in Borders or Barnes and Noble stores than you do online, but you can buy coffee and hobnob with other customers. It's a social activity to shop for books in person. The bandwidth is higher in real life (see my last post), which allows for a richer experience, and some people will pay a premium for it. That gives us our main question to drive survival of the small privates.

What's special about being here? Don't look at things you'd wish to be different as much as what you do well. Close to the beach? How can you take advantage of that? In a major city? What unique benefit can you leverage for your students? What is it that's worth paying a premium for?

A perhaps less palatable question is this: how can you subsidize your traditional program with cheap-to-operate secondaries like online and evening school? The latter has a built-in advantage over yoyo because of locality--if it's convenient and customer service is excellent. Online programs probably need to be limited to a very few, very good ones--something you can specialize in. Maybe the Masters of Applied Ambiguities is your niche.

If it's liberal arts you're into, it's a lousy brand. Fix it. I'm not saying the product is bad, but the marketing generally is. Answer the question: what does liberal arts mean in terms of the graduate's career? See my post about the mismatch between expectations and reality here:
The bottom line is that we discovered that many of our students don't understand the product they're buying, and we don't understand them very well either. It's not the kind of thing one can slap a bandaid fix on, but will require a complete re-think of many institutional practices.
Get some young people involved. Youth may be wasted on the young, but only they understand exactly how it's being wasted. If your strategic planning group is all over 22, you have a problem.

In general, survival means always looking for an edge to increase the probability of making it one more year (details). This will become critical if the easy meals become harder to find. I heard someone say once that success is a big predictor of failure. The point was that complacency is not conducive to increasing those probabilities. At this point, I think there's plenty of opportunity to find a niche. But there are only so many niches, and to mix metaphors, the music will stop at some point leaving a bunch of Edusaurus species with growling tummies.

[Edit: I changed cybercodon to cyberdon as easier to say and arguably more accurate]

Update 10/8/2009: See "Served, Yes, But Well-Served?" from Inside Higher Ed. Quote:
The list published by the Student Lending Analytics blog last month jumped off the computer screen: Of the 10 colleges and universities whose students received the most Pell Grant funds in 2008-9, 7 were for-profit institutions.
The article goes on to speculate about the meaning of this, but one thing is clear: Cyberdon is through the gate. The big players in the online market aren't heavy discounters aiming for the best students--that's not where the easy money is to be made right now. Judging from the article (and looking at the Pell grant numbers), the opposite may be the case: find students who have trouble getting into bricks and mortarboard institutions because of their own qualifications or because of overcrowding at two-year schools. It's a kind of captive audience. Squeeze out all the subsidies and loans you can, and keep the price high. As long as students are willing to take out loans, there's no downward price pressure.

See also: "Scaling Higher Education" and "End of the World, Etc."

*I mean no disrespect to either institution, nor to the idea of low cost ed. I got the moniker like this: University of Walmart = U of W = U of (UU) = (UU) U, but You-You is easier to say as Yo Yo, hence Yo-Yo You. Logical, no?

Motivation, Outcomes, and Bandwidth

I suppose every generation creates its share of hideous neologisms and circumlocutions. For me "on a regular basis" (instead of "regularly") is one of the worst. I made the mistake of telling my daughter how much I hate it, and now she uses it regularly just to annoy me. But "incentivize" must also rank up there on the list of 20th century abominations. The idea is simple, and is probably linked to the recently debunked myth of markets that are perfectly efficient and people who always act in their own best economic interests. Paul Krugman describes this empirical enlightenment of economists vividly in his recent New York Times piece "How Did Economists Get It So Wrong?"

It reminds me of something I read a long time ago about an engineer doing research on the effect of a crowd on wireless transmission, beginning with: assume that a person is a one-meter diameter sphere of water... Assumptions and approximations have to be watched carefully.

I was reminded of "incentivize" a couple of days ago when I came across a TED talk by Dan Pink on the science of motivation. It's a 17 minute video you can see here. I will summarize some of his points, but you might find it more interesting to watch the video first. The topic of the talk was motivation: what types work in what circumstances. This is an interesting topic in higher education for various obvious reasons, including one you may not think of right off. More on that later.

Motivation isn't as simple as it seems, it seems. Consider the crazy things we do in the name of motivation. A car cuts us off in traffic and we may get angry, wave interesting gestures and honk at the driver. We might even call the police and report it if the behavior is egregious enough. Why are we doing that? At the bottom of it, I would posit that we are unconsciously trying to incentivize the driver not to do such things again through negative reinforcement. But in most cases, the odds that we will ever encounter this particular driver in that situation again are probably remote. (Consider how you might behave differently if it were your neighbor rather than a stranger, to see some of the complexities here.) So we are wasting our time at best, and possibly even acting against our own best interests. But such inclinations run deep. It's worth bringing their effects to the light of day.

We seem to have an instinct to incentivize (I'm going to get that word out of my system), even when it isn't likely to do any good. As pointed out above, our actions could make the situation worse. I assume that there are evolutionary reasons for these tendencies: a million years of living in social groups must have had some effect on our programming. There seems to be a general societal purpose to such actions (see this article, for example).

So now to Dan Pink's talk (spoilers ahead). He makes the same point forcefully, applied to the workplace: managers don't understand incentives and are mostly doing the wrong thing to motivate employees. Here are some quotes from the transcript.

Sam Glucksberg did a Candle Problem experiment to learn about the effect of incentives:
He gathered his participants. And he said, "I'm going to time you. How quickly can you solve this problem?" To one group he said, "I'm going to time you to establish norms, averages for how long it typically takes someone to solve this sort of problem."

To the second group he offered rewards. He said, "If you're in the top 25 percent of the fastest times you get five dollars. If you're the fastest of everyone we're testing here today you get 20 dollars."
It took the second group three and a half minutes longer on average to solve the problem. This is counter-intuitive. As Dan Pink puts it:
You've got an incentive designed to sharpen thinking and accelerate creativity. And it does just the opposite. It dulls thinking and blocks creativity.
And most amazing is the claim:
This has been replicated over and over and over again, for nearly 40 years. These contingent motivators, if you do this, then you get that, work in some circumstances. But for a lot of tasks, they actually either don't work or, often, they do harm. This is one of the most robust findings in social science. And also one of the most ignored.
Further research, again using the Candle Problem, showed that incentives can positively affect performance, but only when the problem was simplified to a rote (I would say low-complexity analytical) task. The result reinforces the idea that the prospect of immediate reward (or punishment) causes us to reduce creative, conceptual approaches to problems in favor of direct obvious connections.

Since this is September, it's easy to make the leap to 9/11 as an example. In the early 1900s, the czarist Russian security organ had an imaginative idea: what if the terrorists (who were blooming everywhere) got it into their heads to crash an airplane into a building? I read about this in Orlando Figes' A People's Tragedy. Compare that creative idea to the actual response to the horrible actuality: a very narrow focus on preventing exactly the same thing from happening again. This isn't criticism--it's very natural and sensible--the point is that a big whomping motivation narrowed attention to what exactly the problem was seen to be in an obvious sense, not what it could be in the larger sense. If you read my last post about generalizing through recursion, you'll see what I mean. Here's a list of Bad Things That Can Happen, which have nothing to do with taking off your shoes before you board a plane:
  1. A near-earth asteroid could hit us
  2. The caldera at Yellowstone could blow
  3. Gene hacking of biological viruses becomes as common as computer-virus hacking, and some 18 year-old sets off a catastrophe (you do remember the 90s?)
  4. Nanotech goo takes over the world
  5. The oceans turn to acid as the climate heats up
  6. Environmental toxins are having epigenetic effects that will last generations
  7. A housing bubble could blow up the banking industry.
These are the sort of broad-spectrum dangers you're not likely to think about while your house is on fire. I think it's fair to summarize the findings Dan Pink describes as "incentives narrow focus."

If these findings are valid, much of the way management is done in business, including higher education, is wrong. In the video, Mr. Pink talks about some alternatives.

Relating this to education. If you've been in the classroom, you've probably been as frustrated as I have been by the question "is this going to be on the test?" To the mind of a "lifelong learner" this is entirely the wrong attitude. But it's easy to see from the perspective of motivational cause and effect that the incentives we apply would lead directly to that question. To use that awful word again, we incentivize students with grades. Why should we find it surprising that they tend to narrowly fixate on grades?

Dan Pink has created a consulting business out of this idea, where he pitches:
And to my mind, that new operating system for our businesses revolves around three elements: autonomy, mastery and purpose. Autonomy, the urge to direct our own lives. Mastery, the desire to get better and better at something that matters. Purpose, the yearning to do what we do in the service of something larger than ourselves.
Whether this is a formula that works or a Utopian dream is unknown, but the ideas are certainly worth considering. Notice that of autonomy, mastery, and purpose, only the second one is a cognitive skill. The other two are affective, or noncognitive, or in jargon-free plain English: emotions.

I think there is untapped opportunity to try to engineer ways to motivate students more sensibly--to model and inculcate autonomy and purpose, and illuminate the role of zeal in creating mastery. That's all good, and I think there are opportunities especially for small liberal arts schools here. It's particularly ironic that incentivizing may actively hinder the teaching of critical thinking, which is supposedly what grades-bound liberal arts colleges are good at. But there's a bigger question.

"A Virtual Revolution is Brewing for Colleges" from The Washington Post is the latest article I've seen predicting the doom of traditional higher education. I blogged about this topic recently here. The argument is that for many subjects, education can happen conveniently and cheaply over the Internet, and that competition will drive the bricks and mortarboard model to ruin (except for the elite institutions that rely on deep pockets or have massive self-sustaining endowments).

What, aside from inertia, stands in the way of this transformation? I think one of the biggest problems for online education is the noncognitive load it places on the consumer: they have to be motivated to log in and do the work. They have to minimize the distractions of Facebook and a million other things while working on the computer. I don't have any statistics to back this up, so I may be completely wrong. But it seems to me that one current advantage of a residential school is the pervasive culture that comes with it. As imperfect as it is, there is social pull to come to class and not humiliate oneself by flunking every test. In the nearly anonymous hyperspace of online classes, I imagine that this is less so. In-person interactions are naturally more engaging than online ones. Is that really true? If so, how long will it remain true?

For the moment, I can't conceive that online teaching can approach the richness of a good professor's interactions with students in class, the dining hall, and in the office--the social engagement that includes mentoring and a kind of tribe-like kinship that comes from the circle of mutual acquaintances and shared experiences that play out in full-color, real-time, three-D.

In short, the bandwidth for real life ("rl" in cyberspeak, contrasting with, say, "vr" for virtual reality, or specifically "sl" for the online world Second Life) is still far superior to anything modems can deliver. But rather than leveraging this advantage, rl institutions waste most of the bandwidth. We're generally not engaging students on autonomy and purpose in the pursuit of mastery. We care about grades and bureaucracy and grants from the government.

Tentative conclusions. There may be a niche for second-tier institutions in the new education landscape to provide premium rl education, but only if they seriously address the engagement problem. George Kuh of the NSSE comes at this from another angle, and he actually does have some statistics. The point isn't just the survival of the traditional model; it's to provide a service that is superior to online education because of bandwidth and proximity: a million years of evolution has programmed us to live reasonably well together in social groups, and that should be taken advantage of.

De-emphasizing traditional grades is one step in that direction. Read my post about Western Governors University to see a model for how that is already happening (online). But that's only part of it. A portfolio that a student can carry with them (and accumulate as a life-long resume) could document not just subject mastery but also explicitly address noncognitive traits like purpose. Higher education has been allergic to "purpose" since it became largely secular. It's time to reconnect with the big "why" questions outside of a hermetically sealed philosophy course.

Online education isn't going to stand still, of course. Already there are very motivated people working on the problem of creating learning communities online. These visionaries think big, and for the most part, think "cheap" or "free" (search open education on this blog). Bandwidth will increase, rl will become more conflated with vr, and the next generations will perhaps feel as at home in a warm LCD glow as they do in rl. For institutions frittering away their bandwidth advantage now, remember what happened to CDs. MP3s are generally inferior to CDs because the former are compressed. But MP3s rule because bandwidth loses to convenience.

We might think of online education as a low-pass filter that keeps only the low end of the spectrum, the way the telephone company transmits only a narrow band of frequencies when you talk in order to save money. What value is the high-frequency stuff? Can it be used to engage students in ways that vr can't approach? I don't know, but I think for the time being the answer is yes. I'll now take off my sackcloth, shave my beard, abandon the giant urn, and give up this prophetic conceit (I can't go to work like this) after one final prognostication:

It's a good time to be an energetic new college or university president with a creative, entrepreneurial spirit. There are opportunities. It's a bad time to be locked into the traditional model of private higher education that demands a high price and then blows the bandwidth.

Sunday, September 13, 2009

Recursive Critical Thinking

I was going to entitle this piece "critical thinking squared" as a cute way to imply critical thinking about critical thinking, but the imprecision bothered me. Squared means multiplication by self, and that's not the same as applying a process to itself. What multiplication means in this context isn't precise either, but you can possibly make some sense of it by considering a combinatorial factorization into dimensions like critical thinking = (analysis, creativity, communication). If we abbreviate critical thinking = CT, then CT² might look like a matrix:


               | Analysis                 | Creativity                 | Communication
Analysis       | Analysis                 | Creativity * Analysis      | Communication * Analysis
Creativity     | Analysis * Creativity    | Creativity                 | Communication * Creativity
Communication  | Analysis * Communication | Creativity * Communication | Communication

This assumes that each dimension is idempotent (meaning S*S = S), and that "*" is some way of combining the two dimensions. You still have to figure out what the Creativity * Analysis combination means, but at least you have a way to produce detail from the squaring operation. But this is all rather silly, which is why I don't like the "CT squared" idea.
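For the notation-minded, here's the same table rendered compactly (my own LaTeX sketch, writing A, Cr, and Co for analysis, creativity, and communication, with the S * S = S assumption collapsing the diagonal):

    \mathrm{CT}^2 =
    \begin{pmatrix}
      A       & Cr * A   & Co * A  \\
      A * Cr  & Cr       & Co * Cr \\
      A * Co  & Cr * Co  & Co
    \end{pmatrix},
    \qquad S * S = S \text{ for each dimension } S.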

Here's a better way to think about it. If you talk about talking, that isn't (talking)², but rather talking(talking), expressed here as a function that takes itself as an input.

This is even practical. For example, you could write a function get_loc(...) in the C programming language to take a function and return its address in memory. Then you could ask for get_loc(get_loc) to retrieve its own location. This sort of thing is called recursion, and it's a big deal in computer science. In common parlance, we might stick the prefix meta- in front of the concept to show that it's recursive, as in metacognition, which in the right context we justifiably call a noncognitive trait: the reflective practice of thinking about one's own thinking process. More on the relationship between CT and noncogs later. First, let's take a closer look at the role of recursion in problem solving.
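As an aside, here's roughly what the get_loc idea could look like in C. This is a minimal sketch of my own, not production code; strictly speaking the C standard doesn't guarantee that a function pointer converts cleanly to a void *, but the conversion works on common platforms.

    #include <stdio.h>

    /* Any function can be handed over as a generic function pointer. */
    typedef void (*any_fn)(void);

    /* Take a function and report its "location": the pointer value itself. */
    void *get_loc(any_fn f)
    {
        return (void *)f;
    }

    int main(void)
    {
        /* Self-application: ask get_loc for its own location. */
        void *where = get_loc((any_fn)get_loc);
        printf("get_loc lives at %p\n", where);
        return 0;
    }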

In a few stolen moments this morning I was sipping an iced latte, enjoying a cool breeze, and trying to make some progress on a research project regarding survival in a certain abstract sense. You can read the actual paper here, but in a nutshell you can imagine an environment that poses survival challenges to organisms (all abstracted into a computer language, which you can learn more about, and download a simulator for, here). There are two different questions regarding the complexity of a given environment:
  1. What's the simplest thing that can survive the given conditions?
  2. What's the simplest recursive process that can find the solution to #1?
Here, recursion means that some process can be tried over and over again, feeding the output of the last iteration into the input of the next. Natural selection, for example, acts recursively on the gene pool, blindly honing the fitness of the survivors.
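To make "the output of the last iteration feeds the input of the next" concrete, here's a toy C sketch of blind mutation plus selection. It's my own illustration, not the simulator from the paper, and the fitness function and parameters are invented:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy fitness function: the best possible candidate is x = 3. */
    static double fitness(double x)
    {
        return -(x - 3.0) * (x - 3.0);
    }

    int main(void)
    {
        srand(42);
        double candidate = 0.0;                                  /* generation 0 */
        for (int gen = 0; gen < 1000; gen++) {
            double mutation = (double)rand() / RAND_MAX - 0.5;   /* blind variation */
            double mutant = candidate + mutation;
            if (fitness(mutant) > fitness(candidate))            /* selection */
                candidate = mutant;       /* this output becomes the next input */
        }
        printf("best candidate after 1000 generations: %.4f\n", candidate);
        return 0;
    }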

Let me give a more down-to-earth example, as a simple "critical thinking" problem. Suppose Tatiana works all day in retail, and part of her job is to calculate sales reductions for coupons, sale prices, and so on. In addition she has to add sales tax to total amounts. For reasons known only to management, they skimped on her point of sale (cash register) and these functions are not included. So all day long she has to do stuff like:
  1. Find actual cost of an item by reducing for sale price, coupon, etc.
  2. Sum adjusted prices
  3. Calculate tax
  4. Add to get total
This is pretty tedious and prone to error, so there's an advantage to finding the best possible way of doing it. In the context of my framing questions, we should ask:
  1. What's the best way of doing her job?
  2. How do we find it?
In practice, we have to address the second before the first. The second we might call a critical thinking exercise, requiring analytical and creative thought.

Tatiana need not be reflective. She probably already has a solution, and may not care that it's not optimal. I see this all the time in real stores. A clerk wants to reduce an item by 15%, say. Most often they multiply the price by 15% (.15) on a calculator, write down this number, and then subtract it from the original price. Sometimes I tell them a quicker way to do it: just multiply the original price by .85 and you're done.
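Here's the same arithmetic as a tiny C sketch of my own (the sticker price and tax rate are invented): the clerk's three-step method and the one-multiplier shortcut produce the same total.

    #include <stdio.h>

    int main(void)
    {
        double price    = 40.00;     /* hypothetical sticker price */
        double tax_rate = 0.07;      /* hypothetical sales tax */

        /* The usual method: compute the discount, write it down, subtract, then tax. */
        double discount   = price * 0.15;
        double slow_total = (price - discount) * (1.0 + tax_rate);

        /* The shortcut: fold "15% off" into a single multiplier. */
        double fast_total = price * 0.85 * (1.0 + tax_rate);

        printf("three steps: %.2f   one multiplier: %.2f\n", slow_total, fast_total);
        return 0;
    }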

In a complex environment, you're never really finished with the "how do we find a better solution?" step. The question itself is recursive: "how do we find better ways of finding better solutions?" Mathematics is full of this sort of thing. You can follow the chain of meta-thought all the way up to something called category theory, where logic itself can be generalized (logic about logic).

So without really thinking about the definition, the idea of "critical thinking" can get you rapidly into the deep part of the pool. For me, it's very important to keep straight the difference between knowing a good solution to a problem and finding a good solution to a problem. I don't think this is as appreciated as it should be. The first is analytical/deductive and the second is creative/inductive, and they require very different preparations.

Think of an old-timey telephone switchboard operator, plugging and unplugging wires all day (photo courtesy of Wikipedia). There's a vast difference between knowing how to operate the switchboard and knowing how to design one or improve existing designs. The link between these two questions, as with the ones above, is the "why" operator. Here's a possible chain of why-iterations for thinking about telephones:
  1. Q: Why are you moving those wires and plugs around?
    A: I'm operating a switchboard according to the procedures I've been trained in.
  2. Q: Why is there a switchboard?
    A: To facilitate telephone calls.
  3. Q: Why are telephone calls useful?
    A: So people can communicate across distances.
  4. Q: Why do people need to communicate across distances?
    A: So they can lead better lives.
  5. Q: Why do people need to lead better lives?
Each one of these increasingly general domains has its own problems and solutions. Solving the general ones can make the specific ones go away. If we keep asking why (see this related article), we end up with very general questions like "what problem does my existence solve?" and "why does anything exist?" I've tried to portray this recursion graphically below.
I'm assuming that the creation of knowledge is scientific (what art does is something different from what I mean here). I've quoted Bertrand Russell before on this point (here):
All definite knowledge--so I should contend--belongs to science; all dogma as to what surpasses definite knowledge belongs to theology. But between theology and science there is a No Man's Land, exposed to attack from both sides; this No Man's Land is philosophy.
Philosophy, theology, and other avenues of inquiry that remain immune to the scientific method are lumped together at the bottom of my graph. Wouldn't it be nice if we showed our students of critical thinking how this works? The unveiling of the breadth of meta-thought ought to be a stunning moment of realization for an undergraduate. Consider the following question and meta-question:
Why is the sky blue? (proposed answer here)
Why ask why?
From a scientific question we arguably leap directly over all of science to a philosophical one. Not only that, to my eyes it seems like a fixed point under meta-recursion. That is:
Why ask "why ask why?"? is the same as Why ask why?
Which would make the question the most profound one possible, I suppose. This could be a great starting point for a course on critical thinking. Note that I kind of cheated in my one-step leap to "why ask why?" Figuring out how is your meta-cognition homework. :-)

How does all this fit with existing literature on critical thinking? A colleague recently pointed me to the 1990 publication "The Delphi Report" on critical thinking, which arguably kicked off recent interest in the teaching and assessment of said skill. In the executive summary, which is linked to the title, a consensus statement reads:
We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based.
This is quite different from the line I've taken above, isn't it? Maybe it's not even useful to use the term "critical thinking" for both. In the definition above, which is the kind usually used, it's described as an activity with a particular type of result. The activity itself is not described other than as purposeful, self-regulatory judgment. These are noncognitive descriptors, please note. In fact, the definition elaborates on this point with a vivid description of the thinker:
The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fairminded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit.
As far as I can tell, in practice most programs in critical thinking don't actually pay much attention to the "affective" or noncognitive traits listed. But that's not too unexpected--academia in general seems to be allergic to modeling and teaching personal attributes. I find it increasingly odd that this is so.

Into the meat of the executive summary we do find particular cognitive skills. You'll see these or similar ones in rubrics and learning taxidermy.
  1. interpretation
  2. analysis
  3. evaluation
  4. inference
  5. explanation
  6. self-regulation
I don't see how self-regulation is cognitive, but maybe it is in a metacognition sort of way (self-reflection). One of the findings is that evaluating one's own thinking is a way to improve it.

Although they don't get around to saying it this way, the authors note the importance of domain-specific knowledge:
Although the identification and analysis of CT skills transcend, in significant ways, specific subjects or disciplines, learning and applying these skills in many contexts requires domain-specific knowledge.
Knowing how to solve an urgent problem while sailing is different from solving one while flying a plane.

This debate is important. Lots of institutions put "critical thinking" on their to-do list. Good definitions should lead to good implementations and good assessments.

Although I appreciate the value of the work that's been done in traditional meta-critical thinking, I don't much like the result--those lists of vague terms like interpretation and evaluation. I know they can be used successfully, and they can probably produce a good curriculum and assessment. But to me they're just a disjoint collection of loosely-defined techniques that a committee came up with. There's no underlying structure or theory, no way to make sense of it all by asking the meta-question: why is critical thinking the way it's described in "The Delphi Report"? You can only answer that the experts agreed that this is what it should be. In Russell's description, this makes it dogmatic. And if critical thinking has any value at all, it's to question dogma, no? That makes it ironic, but we can't judge too harshly on this account. Even Karl Popper freely admitted that his system could not be proven to be self-consistent (i.e., you can't prove that nothing is ever really proven). You have to start somewhere.

It may just be my bias coming from a computer science/math background, where a more natural schema is the study of algorithms and complexity, but I'd like more than the opinion of a panel of experts. I want to keep asking why until there is a self-consistent answer, if possible. In the meantime, here's my recipe for teaching CT:
  1. Analytical thinking in various domains
  2. Creative thinking in domains where analytical thinking is well-developed by the student
  3. Training in recursive thought [ including CT(CT) ]
  4. Communications skills (otherwise, what's the point?)
  5. Noncognitives like intellectual honesty and open-mindedness and willing use of the above skills.
I know the first two can be effectively assessed; I've never tried to do the third because it just occurred to me today. The last one ought to be on everyone's to-do list.

Thursday, September 10, 2009

Abandon All Hope (comic)


[Previous comic]
Photo credits: first and third panels, thorinside; second panel, Scrunchface; bottom left, Hamed Masoumi; bottom right, footloosity and Wikipedia. This comic may be distributed under Creative Commons.

Update: I cleaned up the spacing and changed the color on the top right panel.

Wednesday, September 09, 2009

Bad Meetings: Brainstorming

We've probably all been in a strategy session where we're advised to relax our minds and let a stream-of-consciousness flow of ideas emerge while someone writes the group's output on a whiteboard. According to "The Brainstorming Myth" in Business Strategy Review, that technique is ineffective. The abstract reads:
Research shows unequivocally that brainstorming groups produce fewer and poorer quality ideas than the same number of individuals working alone. Yet firms continue to use brainstorming as a technique for generating ideas. This continuing use of an ineffective technique is interesting psychologically. From a practical viewpoint, understanding why brainstorming is usually ineffective, and why people still do it, gives a basis for suggesting how managers can improve the way they use it.
I didn't pay $50 to read the whole article, but I did find a review of it on PSYBLOG here. The article notes that although brainstorming is supposed to foster creativity, "experiment after experiment has shown that people in brainstorming sessions produce fewer and lower quality ideas than those working alone." Three problems are cited:
  • Slacking off, letting the "rest of the group" do the work
  • Being afraid of being evaluated on the quality of one's ideas
  • Not being able to get one's ideas written down because others are talking
The real point is to see what can be done to improve the process. One answer is to use technology. The method suggested in the review article is similar to what I've been using lately. Rather than holding an idea meeting, I just email the group a link to an Etherpad document (screenshot below).
Etherpad is free for a public pad--not appropriate for sensitive documents, but very handy for everything else, and you can buy private access if you want. The pad excerpted above was a project list for the Dean's Council, which I created to allow us to assign ownership of tasks and set due dates. The different colors are different authors. This isn't precisely brainstorming, but it's similar. The technique is particularly suited to online brainstorming because multiple people can be logged in at the same time. Because output is color-coded by participant, it's harder to be present and say nothing (social pressure works for you). Also, you can revise, amend, or delete your ideas on the fly; you're not dependent on someone else to write them down for you. Finally, all this happens in real time, with multiple people editing the document simultaneously. It's a very dynamic feeling to see a bunch of busy editors adding, revising, and commenting on each other's work. There's a sidebar chat window for the latter. Authors are identified in a color-coded index:

I think it's important to set a meeting time initially, rather than trying this asynchronously. I can't prove that scientifically, but it's certainly more fun to work when others are obviously busy with the task and you have a chance to chat with them about it. To keep track of my Etherpad conversations and other documents and links I use a mind map to organize and share the information (Mindmeister is shown here. The little arrows are hyperlinks, some to Etherpad docs):


On the other hand, meetings ARE good at killing bad ideas, according to the article.
[I]t emerges that groups do have a natural talent, which is the evaluation of ideas, rather than their creation. The conclusion of the psychological literature, therefore, is that people should be encouraged to generate ideas on their own and meetings should be used to evaluate these ideas.
Since an evolutionary approach to problem solving depends on both the creation of lots of ideas and then the ruthless selection of only the best, this nicely complements the online brainstorming idea. An additional benefit is that the process of group decision-making bonds the group to the outcome by making it a social affair. This feeling of involvement (I'm interpolating here) is good for carrying the decision forward, especially if it has political implications. And in a university, what decisions don't have political implications?

For more about meetings, see my posts: Creating Meeting Discipline, Meeting Salad, Managing Meeting Entropy, The Two Meeting Personalities, The Secret Life of Committees

Also of interest from PSYBLOG: 10 Rules that Govern Groups