Tuesday, March 31, 2009

A Learning Paradox

Assessing learning isn't just done in the education industry, nor is it even exclusive to biological organisms. Computer scientists have been trying to teach computers to think for decades now. To paraphrase a saying about fusion power, we might say that true artificial intelligence is just 20 years in the future, and always will be. There have been challenges along the way, and knowing a bit about them might inform the kind of wet-brain learning that we try to measure from across our desks in the institutional effectiveness building. (What? You don't have a building yet?)

A standard test for machine intelligence is the Turing Test. Alan Turing established some of the fundamental ideas of computer science, helped crack German ciphers in World War II, and independently derived the Central Limit Theorem, among other things. He was also persecuted after the war for being gay and ultimately committed suicide by eating an apple laced with cyanide. I highly recommend Andrew Hodges's biography, Alan Turing: The Enigma.

René Descartes didn't believe in thinking machines. Quoting him from the Stanford Encyclopedia of Philosophy:
If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others.
This is essentially the test Turing considered. In 1950 he muses about something he calls the imitation game:
I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. [source]
First of all, note that in 1950, computers were massive glowing things with very limited capacity by our standards. For Turing to correctly guess that by 2000, hard drives would be measured in gigabytes is flat-out astounding.
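How good was the guess? It depends on how you read Turing's 10⁹, since he was writing before the byte was a standard unit. A quick back-of-the-envelope check under both readings:

```python
# Turing's 1950 figure: a storage capacity of "about 10^9".
# The unit is ambiguous -- the byte wasn't yet standard -- so
# check both readings of the figure.
capacity = 10**9

print(capacity / 8 / 10**6, "MB, if the figure means bits")   # 125.0 MB
print(capacity / 10**9, "GB, if the figure means bytes")      # 1.0 GB
```

Either way, the order of magnitude was right, which is the remarkable part.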

The main point, however, is that Turing poses as the test of intelligence an interview. The question is simple: are we convinced that the other side of the conversation is intelligent?

You can try out Turing's imitation game yourself, no doubt running on computers with gigahertz processor clocks. Just browse to jabberwacky.com and start chatting. Below is an excerpt of an argument I had with the thing about what day of the week it is; its responses are in bold.

[chat transcript image]
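Chatbots have a long lineage; jabberwacky learns its responses from past conversations, but the scripted flavor goes back to ELIZA in the 1960s. As a rough illustration (all the rules here are invented), a pattern-matching bot in a dozen lines of Python shows why such programs lose arguments about matters of fact: they match surface patterns, not meaning.

```python
import re

# A crude ELIZA-style responder: scan for keywords, return canned text.
RULES = [
    (r"\bwhat day\b",     "It is Saturday."),             # confidently wrong
    (r"\bno\b|\bwrong\b", "I am quite sure I am right."),
    (r"\bwhy\b",          "Why do you ask?"),
]

def reply(utterance):
    for pattern, response in RULES:
        if re.search(pattern, utterance.lower()):
            return response
    return "Tell me more."  # fallback when nothing matches

for line in ["What day is it?", "No, it's Tuesday.", "Why do you insist?"]:
    print("Me:  " + line)
    print("Bot: " + reply(line))
```

No one would mistake this for intelligence after five minutes of questioning, which is exactly Turing's point.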
There are practical applications to all this. It's not just that we might look forward to mechanical devices that can understand plain language and converse with us; the Turing test also protects us from the annoyances of the bad kind of machine. For example, 'bots' may automatically try to log into this blog and leave a comment with a link to some site the owner wants to advertise. Google tries to prevent that with CAPTCHA--hard-to-read images that one has to interpret in order to proceed. The one below is taken from the Wikipedia page:

[CAPTCHA example image]
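The bookkeeping behind the image is trivial; the distortion does all the work. Here's a minimal sketch of the challenge-and-check flow, with hypothetical helper names and the actual image rendering omitted:

```python
import random
import string

def new_challenge(length=6):
    """Pick a random code; a real CAPTCHA renders it as a distorted
    image that a person can read but software (mostly) cannot."""
    return "".join(random.choices(string.ascii_uppercase, k=length))

def check_response(challenge, response):
    """Allow the comment only if the transcription matches."""
    return response.strip().upper() == challenge

# Hypothetical round trip: the server displays an image of `code`,
# and the human reads it and types it back, case-insensitively.
code = new_challenge()
print(check_response(code, code.lower()))  # True for a correct reading
```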
These images are easy for humans to read, but hard for machines to decipher. This brings us to the learning paradox I advertised in the title: Moravec's Paradox. Wikipedia describes it thus:
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, the uniquely human faculty of reason (conscious, intelligent, rational thought) requires very little computation, but that the unconscious sensorimotor skills and instincts that we share with the animals require enormous computational resources.

What the computer scientists thought would be the hard part was what humans find hard--manipulating symbols and doing logic. What they discovered is that the hard part is actually the kinds of things we learn without much effort--how to talk, walk, see, and judge our surroundings, for example. This is a fascinating result.
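To get a feel for the easy half of the paradox, consider how little code it takes to do formal logic exhaustively; a small illustration:

```python
from itertools import product

# The "easy" half of the paradox: verifying a law of logic over every
# assignment of truth values takes only a few lines.
def is_tautology(formula):
    return all(formula(p, q) for p, q in product([True, False], repeat=2))

# Check that (p and q) implies p:
print(is_tautology(lambda p, q: (not (p and q)) or p))  # True

# There is no comparably short program for the "hard" half --
# seeing a face, walking across a room, or reading a CAPTCHA.
```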

An evolutionary explanation is proffered. The idea is that the older the cranial wiring is, the harder the function is to reverse engineer. So sight and visualization, for example, are hard to reproduce with computers because they've been fine-tuned by hundreds of millions of years of evolution, optimized to the point where it's effortless for most of us to see and understand our surroundings. (Not all of us--see Oliver Sacks's The Man Who Mistook His Wife for a Hat.)

In terms of higher education learning outcomes, this carries some warnings. Standardization is an algorithmic approach to assessment, for example, and falls into the category of stuff that's easy for computers. Notice that Turing didn't suggest that we give a machine a paper-and-pencil IQ test. I think few people would be convinced of machine intelligence if all it did was get some vocabulary and logic problems correct. He suggested instead a messy, subjective form of assessment for intelligence. The paradox suggests that some kinds of thinking are very old and likely difficult to assess with discrete methods, while others (those more recently acquired by the species) may be easier because they are more suited to a regular, algorithmic approach to judging success.

Here is an example of a situation that requires intelligence--critical thinking, one might say. See what solution you come up with. I got it from here.

According to a news report, a certain private school in Victoria, BC recently was faced with a unique problem. A number of year 12 girls were beginning to use lipstick and would put it on in the bathroom. That was fine, but after they put on their lipstick they would press their lips to the mirror leaving dozens of little lip prints. Every night, the maintenance man would remove them and the next day the girls would put them back. Finally the principal decided that something had to be done.

First think of your own solution, then read the 'official one.'

She called all the girls to the bathroom and met them there with the maintenance man. She explained that all these lip prints were causing a major problem for the custodian who had to clean the mirrors every night. To demonstrate how difficult it had been to clean the mirrors, she asked the maintenance man to show the girls how much effort was required. He took out a long handled squeegee, dipped it in the toilet, and cleaned the mirror with it. Since then, there have been no lip prints on the mirror. There are teachers, and then there are educators.

This story is funny because of the creative solution. It requires a deep knowledge of social norms, human emotion, and game theory to really grok it. This is the kind of thing that computers may be able to come up with eventually, and when they do and can prove it through conversation, we'll perhaps call them intelligent. It's also the kind of thinking that's very hard to assess with an algorithmic approach, for the same reason it's hard to program--it taps ancient and deep resources within our minds.

See Also: More Thoughts on Moravec's Paradox
