Tuesday, November 03, 2009

Learning and Intelligence

In "What is Learning?" I had fun with the idea of logic and learning, and definitions of learning. Today I'd like to first illustrate how simple the ingredients for learning can be and then turn to assessment of such things and a surprising twist on that.

First, the article yesterday hinted that there was a resolution to the philosophical problem of how a deductive process can learn. Clearly it's possible because the world around us seems to be ruled by deductive physical laws, and yet we can learn to play music and lots of other cool things. Learning is a prerequisite for survival, in fact, and living things are inductive machines. So how is this possible?

Mathematicians like simplicity, and the perfect example is one that illustrates all the complexities necessary to understand a problem without extraneous details. Donald Michie thought very hard about machines that could learn. Despite having worked with and befriended Alan Turing, and having helped crack German ciphers at Bletchley Park using the first real electronic computing machines, he didn't have ready access to what we would call computers when he created a simple learning machine from matchboxes. You read that right.

On BoingBoing you can find "Mechanical computer uses matchboxes and beans to learn Tic-Tac-Toe." It uses 304 matchboxes, each labeled with a tic-tac-toe game position, and markers (beans or beads) inside the box that represent the next move to be made. Over many games, the winning strategies are identified by adding or removing beans according to a simple rule. It's brilliant.
The ingredients are:
  1. A deterministic process (rules indicating which matchbox to use next)
  2. Randomness (randomly choosing the next move based on what markers are inside the matchbox corresponding to the current state of play), and
  3. Memory (markers kept in each box)
Memory can be created with logic and time as we saw last time. Randomness popped up yesterday in the proposed solution to the induction problem with the introduction of Bayes' Rule: conditional probabilities. The markers inside each box represent conditional probabilities of the next move given the "priors" -- what has happened already.
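The three ingredients above are easy to sketch in code. Here's a minimal, hypothetical version of the bead-and-matchbox mechanics in Python (class and method names are mine, and the reward numbers are just illustrative, not Michie's exact scheme): each "box" is a game position mapped to bead counts, moves are drawn at random in proportion to the beads, and beads are added or removed after each game.

```python
import random

class MatchboxPlayer:
    """A sketch of a matchbox learner: deterministic lookup, random
    bead-weighted choice, and memory in the bead counts."""

    def __init__(self, win_reward=3, draw_reward=1, loss_penalty=1):
        self.boxes = {}    # position -> {move: bead count} (the "matchboxes")
        self.history = []  # (position, move) pairs from the current game
        self.win_reward = win_reward
        self.draw_reward = draw_reward
        self.loss_penalty = loss_penalty

    def choose(self, position, legal_moves):
        # Deterministic step: look up (or create) the box for this position,
        # seeding each legal move with 3 beads.
        box = self.boxes.setdefault(position, {m: 3 for m in legal_moves})
        # Random step: draw a move with probability proportional to its beads.
        moves, beads = zip(*box.items())
        move = random.choices(moves, weights=beads)[0]
        self.history.append((position, move))
        return move

    def learn(self, outcome):
        # Memory step: reinforce every move made this game, then reset.
        for position, move in self.history:
            box = self.boxes[position]
            if outcome == "win":
                box[move] += self.win_reward
            elif outcome == "draw":
                box[move] += self.draw_reward
            else:
                # Loss: remove a bead, but never empty a compartment.
                box[move] = max(1, box[move] - self.loss_penalty)
        self.history = []
```

Over many games, moves that tend to end in wins accumulate beads and so get chosen more often, which is exactly the sense in which the bead counts act as conditional probabilities of the next move given the position.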

Given this, it may come as no surprise that there is a market for true randomness. See this service, for example, that advertises bits generated from quantum processes.

Now to assessment. We often concern ourselves with looking at "authentic learning outcomes" from student work, in order to judge how much they've learned. For the most part, we teach them to use tools of rationalism: how to understand and ultimately produce knowledge that fits into the structure of an established discipline. I think we can loosely say that we try to make them more intelligent, or at least enable them to do more intelligent things (in case you think intelligence is fixed rather than malleable, though I don't know what the difference would be).

The Nov. 2 New Scientist article "Clever fools: Why a high IQ doesn't mean you're smart" may turn how you think about assessment sideways. The article says of IQ tests that they are good at assessing logic, abstract reasoning, learning ability, and memory. However:
But the tests fall down when it comes to measuring those abilities crucial to making good judgements in real-life situations. That's because they are unable to assess things such as a person's ability to critically weigh up information, or whether an individual can override the intuitive cognitive biases that can lead us astray.
Arguments that IQ is not all there is to intelligence have been around probably as long as the tests themselves, but what I found new in this article was the connection to mental processes that control the use of intelligence.
[U]nlike many critics of IQ testing, [professor of human development and applied psychology at the University of Toronto, Canada] Stanovich and other researchers into rational thinking are not trying to redefine intelligence, which they are happy to characterise as those mental abilities that can be measured by IQ tests. Rather, they are trying to focus attention on cognitive faculties that go beyond intelligence - what they describe as the essential tools of rational thinking.
Here's an example from the article. Consider the logic puzzler:
Jack is looking at Anne, and Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person?
Possible answers are "yes," "no," and "can't be determined." The answer is given at the bottom.

I encountered this line of thought in my survival research too, writing:
But there is a problem with rationality. A perfectly rational being has no particular reason for preferring existence to non-existence. In fact, a perfectly logical being has no reason to do anything. Consider what I call the Decider's Paradox. Our perfectly logical robot is presented with some environmental data. What is its first question? If it has one, it must be "What should my first question be?" Similarly, its second question must be "What should my second question be?" No other types of questions are possible without an illogical answer to the first one. Perfect logic alone is not enough to work with; some kind of emotional state is also needed to allow considered decisions to take place.
Emotional states control when intelligence is used. The article notes that 44 percent of Mensa members said they believed in astrology in one survey.

The implications for assessment are clear. It's not sufficient to know what students are capable of doing rationally; it's just as important to know if they will employ those tools when they need them. The solution to the puzzle above illustrates this very well (I got it wrong). Here's the problem and solution from the article:
Jack is looking at Anne, and Anne is looking at George; Jack is married, George is not. Is a married person looking at an unmarried person?

If asked to choose between yes, no, or cannot be determined, the vast majority of people go for the third option - incorrectly. If told to reason through all the options, though, those of high IQ are more likely to arrive at the right answer (which is "yes": we don't know Anne's marital status, but either way a married person would be looking at an unmarried one). What this means, says Stanovich, is that "intelligent people perform better only when you tell them what to do".
This is another reason to consider metacognition and noncognitives when constructing the desired outcomes of a curriculum. Think about all those ethical reasoning and civic engagement goals. What good is it if students abstractly know what they "should" do, but have no inclination to actually do it?
