Thursday, December 14, 2006

Complexity and Recognition

I actually had time to think today. Many days feel like being the subject of one of Skinner's experiments.

So I sat outside and ate the sandwich Adelheid made me for lunch. It was a sunny 65 or so. I was reading a book on the philosophy of science, the title of which eludes me at the moment. It was weeded out of the library collection, and I rescued it from the give-away cart. In the introduction, the author writes about perception and understanding, although he's a lot more concrete about it. He uses examples like Tycho Brahe and Johannes Kepler looking at the sunrise. One sees evidence that the earth moves around a fireball, and the other evidence that the fireball moves around the earth. That is, the sensations hitting their respective retinas are roughly the same, but their understanding of the matter is quite different. Are they seeing the same thing? In the prosaic sense, yes, but in a higher-order definition of seeing, they are not. Why not?

This is obviously going to be speculative, but here's a thought. What if we perceive by applying simple rules to the input that comes in on our optic nerve, etc.? So if we see an asymmetrical blob loping along the ground, we might identify the stick-like pieces as legs, notice that there are four, and identify it as a quadruped. Since it's animated, the size of the thing and the length of its legs quickly narrow it down to a few possible animals. With a small amount of information we can pinpoint it as dog or dingo, depending on what continent we live on.
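The cue-narrowing idea above could be sketched in code. This is purely a toy illustration of the speculation, not a real model of perception; every animal name and trait below is made up for the example.

```python
# Each observed cue filters the set of candidate animals, the way
# "four legs, medium size, moving" narrows things to dog-or-dingo.
candidates = {
    "dog":    {"legs": 4, "size": "medium"},
    "dingo":  {"legs": 4, "size": "medium"},
    "horse":  {"legs": 4, "size": "large"},
    "spider": {"legs": 8, "size": "small"},
}

def narrow(observed):
    """Keep only candidates consistent with every cue seen so far."""
    return [name for name, traits in candidates.items()
            if all(traits.get(cue) == value for cue, value in observed.items())]

print(narrow({"legs": 4}))                    # dog, dingo, horse
print(narrow({"legs": 4, "size": "medium"}))  # dog, dingo
```

Each new cue shrinks the candidate set, which is why so little information suffices once the first few rules have fired.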

This suggests that our brains do a continuous data compression of input. Patterns that are identified reinforce the existing definitions, and those that aren't must be decided upon. Are they noise or are they meaningful? Is the creak in the middle of the night just the house settling, or is it a madman with a power tool? This brings us back to Skinner. If a cause is associated with an effect (immediately in the simplest case), then perhaps we file it away as meaningful. Otherwise it's ignored.
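The compression-and-reinforcement idea can be made concrete with a toy sketch: familiar patterns are stored once and reinforced on repeat sightings, while novel input is set aside to be judged as noise or meaning. Again, this is only an illustration of the speculation, not a claim about how brains work.

```python
from collections import Counter

known = Counter()  # how often each pattern has been seen before

def perceive(pattern):
    """Reinforce a known pattern, or flag a new one for judgment."""
    if pattern in known:
        known[pattern] += 1   # reinforce the existing definition
        return "familiar"
    known[pattern] = 1        # new pattern: noise, or meaningful?
    return "novel"

print(perceive("creak at night"))  # novel
print(perceive("creak at night"))  # familiar
```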

But effects have to be noticed. In this over-simplistic model, the ability to entertain the notion that an effect has been observed drives one's acquisition of knowledge (i.e. rules of inference). I imagine that some people have lower thresholds than others. If yours is too low, you'd see portents in every random occurrence. Perhaps you'd become paranoid and removed from (average) reality. The opposite affliction might be to continually ignore signs that your internal beliefs were wrong--as Brahe's were. Finding the middle ground and staying there probably requires something special. I wonder what it is, and I wonder if it can be taught.
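The threshold idea reduces to a one-liner: a cause-effect pairing gets filed as meaningful only if the effect follows the cause often enough to clear a personal threshold. The numbers and cutoffs below are arbitrary, just to show how the same evidence diverges for different observers.

```python
def meaningful(co_occurrences, observations, threshold):
    """True if the effect follows the cause often enough to notice."""
    return observations > 0 and co_occurrences / observations > threshold

# The same evidence, judged by observers with different thresholds:
evidence = (7, 10)  # effect followed cause 7 times out of 10
print(meaningful(*evidence, threshold=0.05))  # True: too low, portents everywhere
print(meaningful(*evidence, threshold=0.95))  # False: too high, signs ignored
```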

UPDATE: The book is Patterns of Discovery by Norwood Russell Hanson, Cambridge University Press 1969.
