Monday, February 18, 2013

Teaching Critical Thinking

I just came across a 2007 article by Daniel T. Willingham, "Critical Thinking: Why Is It So Hard to Teach?" Critical thinking is very commonly found in lists of learning outcomes for general education or even at the institution level. In practice, it's very difficult even to define, let alone teach or assess. The article is a nice survey of the problem.

In the approach I've taken in the past (with the FACS assessment), I simplified 'critical thinking' into two types of reasoning that are easy to identify: deductive and inductive. Interestingly, this distinction shows up in the article too, where the author describes the difference (in his mind) between critical and non-critical thinking:
For example, solving a complex but familiar physics problem by applying a multi-step algorithm isn’t critical thinking because you are really drawing on memory to solve the problem. But devising a new algorithm is critical thinking.
Applying a multi-step algorithm is deductive "follow-the-rules" thinking. He's excluding that from critical thinking per se. To my mind this is splitting hairs: one cannot find a clever chess move unless one knows the rules. We would probably agree that deductive thinking is absolutely prerequisite to critical thinking, and this point is made throughout the article, where it's included in "domain knowledge."

In the quote above, the creation of a new algorithm exemplifies critical thinking--this is precisely inductive thinking, a kind of inference.

Now I don't really believe that even the combination of deductive and inductive reasoning covers all of what people call 'critical thinking,' because it's too amorphous. It's interesting to consider how one might create a curriculum that focuses on 'critical' rather than 'thinking.' It could be a course on all the ways that people are commonly fooled, either by themselves or others. It would be easy enough to come up with a reading list.

Another alternative is to focus on the 'thinking' part first. This seems like a very worthy goal, and in retrospect it's striking that we don't seem to have a model of intelligence that we apply to teaching and learning. We have domain-specific tricks and rules, conventions and received wisdom, but we generally don't try to fit all those into a common framework, which we might call "general intelligence" as easily as "critical thinking." Usually it's the other way around--how do I embed some critical thinking object into my calculus class? This latter method doesn't work very well because the assessment results (despite our desires) don't transfer easily from one subject to the next. This is the main point of the article linked at the top--domain-specific knowledge is very important to whatever "critical thinking" may be.

A Model for Thinking

I don't presume to have discovered the way thinking works, but it's reasonable to try to organize a framework for the purposes of approaching 'critical thinking' as an educational goal. The following one comes from a series of articles I wrote for the Institute for Ethics and Emerging Technologies (first, second, third), which all began with this article. The theme is how to address threats to the survival of intelligent systems, and it's informed by artificial intelligence research.

A schematic of the model is shown below.

[Figure: a cycle of awareness linking perception, prediction, motivation, and action.]

We might think of this as a cycle of awareness, comprising perception, prediction, motivation, and action. If these correspond to the whims of external reality, then we can reasonably be said to function intelligently.

The part we usually think of as intelligence is the top left box, but it has no usefulness on its own. It's a general-purpose predictor that I'll refer to as an informational ontology. It works with language exclusively, just as a computer's CPU does, or the neurons in our brains do (the "language" of transmitted nerve impulses). Languages have internal organization by some convention (syntax), and associations with the real world (semantics). The latter cannot exist solely as a cognitive element--it has to be hooked up to an input/output system. These are represented by the lower left and right blue boxes. The left one converts reality into language (usually very approximately), and the right one attempts to affect external reality by taking some action described in language.

All of these parts are goal-oriented, as driven by some preset motivation. All of this perfectly models the typical view of institutional effectiveness (IE), by the way, except that the role of the ontology is minimized--which is why IE looks easy until you try to actually do it.
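
To make the moving parts concrete, here is a minimal sketch of the cycle in Python. It's my illustration, not part of the original model: the names (Agent, perceive, predict, act) and the toy temperature example are invented for the purpose:

    # A toy awareness cycle: perceive -> predict -> act, driven by a
    # preset motivation. Names and logic here are purely illustrative.
    class Agent:
        def __init__(self):
            self.motivation = "stay comfortable"   # preset goal
            # The 'informational ontology': a language-based predictor.
            self.ontology = {"cold": "will be uncomfortable",
                             "warm": "will be fine"}

        def perceive(self, reality):
            # Encode external reality into language, very approximately.
            return "cold" if reality["temperature_c"] < 10 else "warm"

        def predict(self, observation):
            # Use the ontology to guess what happens next.
            return self.ontology[observation]

        def act(self, prediction):
            # Try to change external reality in service of the motivation.
            return "put on a coat" if "uncomfortable" in prediction else "do nothing"

    agent = Agent()
    observation = agent.perceive({"temperature_c": 4})
    print(agent.act(agent.predict(observation)))   # put on a coat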

Each of these components is a useful point of analysis for teaching and learning. Going around the figure from bottom left:

Measurement/Description

When we encode physical reality into language, we do so selectively, depending on bandwidth, motivation, and our ability to use the result in our ontology. At the beach, we could spend the entire day counting grains of sand, so as to get a better idea of how many there are, but we generally don't, because we don't care to that level of precision. We do care that there's sand (the point of going to the beach), but there are limits to how accurately we want to know.

Some language is precise (as in the sciences), and other sorts are not (everyday speech, usually). What makes language usefully precise is not the expression itself (e.g. I drank 13.439594859 oz of coffee this morning), but how reliably that information can be used to make predictions that we care about. This involves the whole cycle of awareness.

Example 1: According to Wikipedia, the mass of a proton is 1.672621777×10^-27 kg. This is a very precise bit of language that means something to physicists who work with protons. That is, they have an ontology within which to use this information in ways they care about. Most of us lack this understanding, and so come away with merely "protons weigh a very tiny amount."

Example 2: Your friend says to you "Whatever you do, don't ride in the car with Stanislav driving--he's a maniac!" Assuming you know the person in question, this might be information that you perceive as important enough to act on. The summary and implication in your friend's declaration constitute the translation from physical reality into language in a way that is instantly usable in the predictive apparatus of the ontology. Assuming you care about life and limb, you may feel disinclined to carpool with Stanislav. On the other hand, if the speaker is someone who you think exaggerates (this is part of your ontology), then you may discount this observation as not useful information.

The point of these examples is that description is closely tied to the other elements of awareness. This is why our ways of forming information through perception are very approximate: they're good enough for us to get what we want, but no better. (This is called Interface Theory.)
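
As a toy illustration of that "good enough, but no better" idea, here's a sketch (mine, not from the article) of perception as motivated, lossy encoding; the function name and the numbers are made up:

    # Perception keeps only as much precision as our predictions need.
    def perceive_sand(grains_exact, motivation):
        """Encode a beach into language at different levels of caring."""
        if motivation == "beachgoer":
            return "there is sand"                       # enough to plan the day
        if motivation == "engineer":
            return f"roughly {grains_exact:.0e} grains"  # order of magnitude
        return f"exactly {grains_exact} grains"          # nobody wants this

    print(perceive_sand(7_432_198_554_201, "beachgoer"))  # there is sand
    print(perceive_sand(7_432_198_554_201, "engineer"))   # roughly 7e+12 grains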

Here are some questions for our nascent critical thinkers:
  1. Where did the information come from? 
  2. Can it be reliably reproduced?
  3. What self-motivations are involved?
  4. What motivations does the information's source have?
  5. What is the ontology that the information is intended to be used in?
  6. How does using the information affect physical reality (as perceived by subsequent observations)?
Notice that these questions are also very applicable to any IE loop.

Question five is a very rich one because it asks us to compare what the provider of the information believes versus what we believe. Every one of us has our own unique ontology, comprising our uses of language, beliefs, and domain-specific language. If I say that "your horoscope predicts bad luck for you tomorrow," then you are being invited to adopt my ontology as your own. You essentially have to if you want to use the information provided. This represents a dilemma that we face constantly as social animals--which bits of ontology do we internalize as our own, and which do we reject? Which brings us to the 'critical' part of 'critical thinking.'

It's interesting that the discussion around critical thinking as an academic object focuses on the cognitive at the expense of the non-cognitive. But in fact, it's purely a question of motivation. I will believe in astrology if I want to, or I will not believe in it because I don't want to. The question is much more complicated than that, of course, because every part of the ontology is linked to every other part. I can't just take my whole system of beliefs and plop astrology down in the middle and then hook up all the pipes so it works again. For me personally, it would require significant rewiring of what I believe about cause and effect, so I'd have to subtract (stop believing some things) part of the ontology. But this, in turn, is only because I like my ontology to be logical. There's no a priori reason why we can't believe two incompatible ideas, other than we may prefer not to. In fact, there are inevitably countless contradictions in what we believe, owing to the fact that we have a jumble of motivations hacked together and presented to us by our evolutionary history.

Intelligence

The usefulness of intelligence lies in being able to predict the future (with or without our active involvement) in order to satisfy motivations. The way we maintain these informational ontologies is a dark mystery. We seem to be able to absorb facts and implications reasonably easily (Moravec's Paradox notwithstanding); we can't deduce nearly as quickly as a computer can, but we manage well enough. It's the inductive/creative process that's the real mystery, and there is a lot of theoretical work on that, trying to reproduce in machines what humans can do. Within this block are several rich topics to teach and assess:
  1. Domain-specific knowledge. This is what a lot of course content is about: facts and deductive rules and conventions of various disciplines, ways of thinking about particular subjects, so that we can predict specific kinds of events. This connects to epistemology when one adds doubt as an element of knowledge, which then leads to...
  2. Inference. How do we get from the specific to the general? At what point do we believe something? This links to philosophy, the scientific method, math and logic, computer science, neuroscience, and so on. Another connection is the role of creativity or random exploration in the process of discovering patterns. We might sum up the situation as "assumptions: you can't live with them, and you can't live without them." Because inference is a fancy word for guessing, it's particularly susceptible to influence from motivation. Superstition, for example, is an application of inference (if I break a mirror, then I will have bad luck), and one's bias toward or away from this sort of belief comes from a motivational pay-off (e.g. a good feeling that comes from understanding and hence controlling the world). (A toy sketch of inference as belief updating appears after this list.)
  3. Meta-cognition. This is the business of improving our ontologies: weeding out things we don't like, pruning, or introducing better methods of (for example) inference. This is what Daniel Kahneman's book Thinking, Fast and Slow is about. That book alone could be a semester-length course. Any educational treatment of critical thinking is about meta-cognition.
  4. Nominal versus real. Because we live in complex information-laden societies, we deal not just with physical reality but also with system reality. For more on these, refer to my IEET articles. One example will suffice: a system pronouncement of "guilt" in a trial may or may not correspond to events in physical reality. At the point the verdict is announced, it becomes a system reality (what I call a nominal reality). The ontology of the system becomes a big part of our own personal version, and one could spend a long time sorting out what's real and what's nominal. For more on that topic, see this paper I wrote for a lit conference.
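
Here is the promised toy sketch of inference as belief updating. Bayes' rule is my choice of formalism rather than anything prescribed above, and the prior and likelihoods are invented numbers:

    # Bayes' rule: move from specific observations toward a general belief.
    def update(prior, likelihood_if_true, likelihood_if_false):
        """Posterior probability of a belief after one observation."""
        numerator = prior * likelihood_if_true
        return numerator / (numerator + (1 - prior) * likelihood_if_false)

    # Belief: "Stanislav is a dangerous driver."
    belief = 0.10                          # prior, before any evidence
    for _ in range(3):                     # three friends independently warn you
        belief = update(belief, 0.9, 0.2)  # warnings are likelier if it's true
    print(round(belief, 2))                # 0.91: the reports have generalized

The motivational pay-off mentioned above hides in the choice of prior and likelihoods: change them and the same reports support a different conclusion.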
Motivation
Humans and the systems we build are very selective about what we want to know, and what we do with that knowledge. Understanding our own motivations and those of others (theory of mind), and the ways these influence the cycle of perceive-predict-act, is essential in order to make accurate predictions. That is, intelligence has to take motivation into consideration. This invites a conversation about game theory, for example. The interpretation of critical thinking as the kind of thing that investigative reporters do, for example, must take the motivations of sources into consideration as a matter of course.

In economics, motivation looks like a utility function to be optimized. Part of what makes humans so interesting is that we are laden with a hodge-podge of motivations courtesy of our genes and culture, and they are often contradictory (we can be afraid of a car crash, yet fall asleep at the wheel). The search for an 'ultimate' motivation has occupied our race for a long time, with no end in sight.
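
As a throwaway illustration of motivation as a utility function (my example, not the post's), here is an agent weighing two contradictory drives; the actions, drives, and weights are all made up:

    # Motivation as a utility function over contradictory drives.
    SCORES = {  # how well each action serves each drive (invented numbers)
        "keep driving": {"safety": 0.2, "rest": 0.1},
        "pull over":    {"safety": 0.9, "rest": 0.8},
    }

    def utility(action, weights):
        return sum(weights[d] * s for d, s in SCORES[action].items())

    weights = {"safety": 0.5, "rest": 0.5}   # fear of crashing vs. wanting sleep
    best = max(SCORES, key=lambda a: utility(a, weights))
    print(best)   # pull over -- under these weights, at least
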
Here's a critical thinking problem: If motivations are like utility functions, they must be acted on in the context of some particular ontology, which goes out of date as we get smarter. How then are we to update motivations? A specific example would be physical pain--it's an old hardwired system that helped our ancestors survive, but it's a crude instrument, and leads to a lot of senseless suffering. The invention of pain-killers gives us a crude hack to cut the signal, but they have their own drawbacks. Wouldn't it be better to re-engineer the whole system? But we have to be motivated to do that. Now apply that principle generally. Do you see the problem?
Taking Action
This isn't usually thought of in connection with intelligence or critical thinking, but it's integral to the whole project. Acting on what we learn is generally not the approach we take in formal education, where we implicitly assume that lectures and tests suffice to increase student abilities. Come to think of it, we don't even have a word for "active use of intelligence." Maybe 'street smarts' comes close, because of its association with the 'real world' rather than the academic, but that's an unproductive distinction. I've heard military people call it the X-factor, which I take to mean a seamless connection between perception, prediction, and action (all tied to some underlying motivation, of course).

But of course the point of all this intelligence apparatus is to allow us to act for some purpose. There are great illustrations of this in Michael Lewis's book The Big Short, which show the struggle between hope and fear (motivations) in the analysis of the looming mortgage disaster, and the actions that resulted.

I've argued before (in "The End of Preparation," which is becoming a book) that technological and societal changes allow us to introduce meaningful action as pedagogy. It's the actual proof that someone has learned to think critically--if they act on it.

Being Critical
If some framework like the one described above can be used to examine intelligence in a curriculum, where exactly does the modifier come in? What's critical about critical thinking? Perhaps the simplest interpretation is that critique allows us to distinguish between two important cases (which may vary, but correspond to motivations). For example, in a jury trial, the question is whether or not to convict, based on the perceptions and analysis of the proceedings. It's these sorts of dichotomies--the aggravating fact that we can't take both paths in the wood--that make intelligence necessary in the first place.

This general task is big business these days, in the form of machine learning, where distinguishing between a movie you will like and one you won't is called a classification problem. Netflix paid a million dollars to the winner of a contest to find a better classifier for assigning movie ratings.
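
For the curious, here's what a classification problem looks like in miniature. This is my sketch, vastly simpler than anything from the Netflix Prize; the users, movies, and ratings are fabricated:

    # Classify a movie as "like" or "dislike" for a user by letting
    # similar users vote. All data here is made up for illustration.
    ratings = {  # user -> {movie: 1 (liked) or 0 (disliked)}
        "ann": {"m1": 1, "m2": 0, "m3": 1},
        "bob": {"m1": 1, "m2": 0, "m3": 1, "m4": 1},
        "cat": {"m1": 0, "m2": 1, "m3": 0, "m4": 0},
    }

    def similarity(a, b):
        shared = set(ratings[a]) & set(ratings[b])
        return sum(ratings[a][m] == ratings[b][m] for m in shared) / max(len(shared), 1)

    def classify(user, movie):
        votes = [(similarity(user, other), ratings[other][movie])
                 for other in ratings if other != user and movie in ratings[other]]
        score = sum(s * v for s, v in votes) / max(sum(s for s, _ in votes), 1e-9)
        return "like" if score >= 0.5 else "dislike"

    print(classify("ann", "m4"))   # like: ann's history resembles bob's, not cat's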

It also makes a nice framework for teaching, and it's a common technique to set up an A vs. B problem and ask students to defend a position (and there's a whole library of resources set up to provide support for this kind of thing). In the abstract, these undoubtedly have some value in honing research and debate skills, but it seems to me that they would be more valuable when connected to real actions that a student might take. Is it worth my while to go to Washington to protest against X? Or go door-to-door to raise money for Y? Or invest my efforts in raising awareness about Z with my project? Maybe we need a new name for this: active critical thinking, perhaps.

So as educators, we are then left with the meta-question: is this worth doing?

Next: "Interfaces and Education" continues this line of thought.

8 comments:

  1. Hi Dave – First, what we know…The mind is generally domain (and heuristics) dependent – Doing IS context specific – “Practical” or “Participative” intelligence exists at the crossroads of optionality and decision-making. The act of comparing can have as much to do with unlearning as learning, so show me a framework about unlearning ☺

    The Nov/Dec issue of Scientific American Mind - Switching on Creativity (Snyder, Wood, Chi) would be of interest, check it out, and I hope the book turns out great.

  2. Hey Dave;
    I think you could allow for another layer of analysis and complexity that would tend to muddle the cognitive picture. From Wittgenstein: much language-based cognition is involved in "forms of life", something that is socially performed as much as it is thought. Most forms of critical thinking tend to be based in a Kantian view of thinking that makes forms of life invisible. And from Bakhtin's thoughts: this performance also happens in dialogue with others, where meaning is negotiated more than analyzed. I don't think I'm disagreeing with your core thoughts, just pointing out that performative cognition like Critical Thinking operates within a complex social milieu.

  3. @donkeyfeathers, thanks for the reference--I'll check it out (it's nice working in a library). The learning/unlearning thing is interesting, and I suppose it's what Kuhn was getting at with his paradigms. I just assume that the ontology is constantly churning, making and unmaking information and connections as perceptions change and conclusions are reached. If we think of this as a giant Markov process, then maybe our beliefs are the fixed points--knowledge that doesn't change during the normal evolution of the ontology. I wonder if there are general properties of an actual intelligence that put constraints on these. For example, is it possible, as Descartes attempted, to have no beliefs? I think we'd probably agree that in a non-pathological intelligence, enough perceptive evidence could change any belief, but I'm not sure that's necessarily true. Perhaps every intelligence is endowed with beliefs that no amount of evidence can overturn. This is all fanciful topological musing, though--I'm not sure what use it could be.

    Replies
    1. When you get a chance to read the referenced article you'll see that for non-pathological intelligence, the unlearning (the literal thinking that is not context-bound, and therefore not a priori dependent) and the learning (contextual constructs due to both circumstance and prior knowledge that induce activation-based memory and cognitive control) need to both be present for critical thinking to achieve conclusions with net utility for the thinker. The 'usefulness' then stems from knowing that any critical decision that leads to an action would be best prepared for with reconciliation in mind. In order to make a critical decision it is much more important to look past the opportunity cost (Kahneman's study of the effect that loss has on our thinking) and recognize that belief change happens when the evidence changes (or at least we hope it will, because we are looking for the most correct answer, not a bias confirmation). Oftentimes, in order to recognize the evidence change, we have to perform the irrational act of suspending judgment in order to reach a better judgment. To court judges this is common practice; to those trying to develop a curriculum around the topic, it can be hard to frame - Dean

    2. I just discovered that blogger now has threaded comments. Go figure.

      What you say about context vs non-context bound processes sounds perfectly reasonable. I haven't got my hands on the article yet, but some of what you say resonates with _On Intelligence_ by Jeff Hawkins and Sandra Blakeslee (http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/B000GQLCVE). They argue that the main function of the neo-cortex is to predict the future and handle exceptions when those predictions don't come true.

      I agree about the curriculum bit. Aside from the knee-jerk reaction to "any curriculum modification is a political nightmare", even the easy parts that one can implement as classroom pedagogy are (to me) not obvious. But I have made a conscious effort to get students to do things in the real world in my calculus class this semester. For example, one project was to measure the speeds of cars along the road with and without speedbumps (two locations), and make an argument about the effectiveness of the arrangement.

  4. @Howard, I agree about the importance of the social aspects. I tried to capture societal influences in the 'nominal reality' category, but there's a lot more to say about it. If we subtract all that (all the rules and language that we use to navigate the social world), we're left with what I'd call 'cynical', in the original sense.

    It's fascinating to think of, say, a government as an intelligent system, and explore its functions using the model above. The court example is just one that illustrates how these system 'realities' get created. There's an obvious short circuit in any system (biological or not) when it can create nominal realities that satisfy motivations, and cause a gap between perceived and objective reality. A government printing money to pay debts would be an example. There are more examples in the IEET articles I linked to.

    Bakhtin's dialogue is interesting here too, because we have to live inside these systems with their odd versions of reality. For example, if a baseball umpire calls a strike, and the instant replay shows it to obviously be a bad call, then the dialogue between spectators diverges from the official reality (which isn't going to change just because of a video). Who was the umpire who said "it ain't nothin' until I call it?" That captures the system monological approach perfectly.

  5. Dave;
    The sports analogy is a great example of the difficulty in establishing objectivity. Video technology in the NFL corrects for fast action, but it brings more of a sense of objectivity to a sport that still has much embedded subjectivity. Baseball, on the other hand, probably the most analyzed sport of all time, embraces subjectivity; even to the point that each umpire is expected to have a slightly different strike zone, and the initial innings are often spent trying to understand how the ump will be calling pitches on that particular day.

    Replies
    1. Another interesting thing about sports is the overt competition between two motivations that are diametrically opposed (usually with perfect symmetry). This is a good arrangement for discovering 'truth'--that is, the dialogical version revealed nowadays by videotape. The bureaucracy of sports doesn't yet have fast enough intelligence (in the general sense of the word--I'm not deprecating individuals) to quickly rely on this sort of adjudication, but the day is probably coming when strikes are called by a little AI box that sits behind the catcher.

      At the other end of the 'truth-discovering' spectrum from transparent competition is a monolithic system that has no competition, no means of truth discovery aside from its own motivations. This is an invitation to self-destruction through "delusions of competency". Lysenkoism is perhaps the poster version of this effect, but it's ubiquitous (e.g. North Carolina's attempt to legislate sea level rise). That's the idea I'm exploring with the series in IEET that starts with "Is Intelligence Self-Limiting?".
