The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to recognize their mistakes. The unskilled therefore suffer from illusory superiority, rating their ability as above average, much higher than it actually is, while the highly skilled underrate their own abilities, suffering from illusory inferiority.
Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding. As Kruger and Dunning conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others" (p. 1127). The effect is about paradoxical defects in cognitive ability, both in oneself and as one compares oneself to others.

This just puts some research behind what Bertrand Russell is quoted as having said:
The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.

So what we have is two epistemologies, and we shouldn't be hasty to choose one as better than the other, despite the obvious bias of the quotes above.
Method 1 (Closed). Obtain a small amount of evidence, and create the most restrictive explanation that fits the facts. Subsequent facts that come to surface do not affect the conclusion.
William of Occam would probably sue me for defamation if he were around to read this. I have intentionally restated his principle in a very narrow sense in order to contrast it with:
Method 2 (Open). Continually gather information and create increasingly complex explanations that account for all the observations. Although the current explanation may be the simplest one that fits the facts, no explanation is ever final--all the others that are consistent with facts are kept in reserve.
I have given the methods intuitive names for convenience (closed vs open), not as prejudgments. The closed method will be the better one in situations where observations can be explained simply. This may be because the underlying cause and effect relationship is of low complexity, or because the variance in observed characteristics is small. "All dogs have four legs" would be an example of the latter. "Stuff falls when you drop it" applies to the former.
The most basic structure of language is a verb applied to a noun, which is a model for the closed epistemology. "Birds fly," "Fire burns," and so on, are summaries of real world observations that can be arrived at accurately from just a few examples and without much error. It's an easy conjecture that these simple relationships became so integral to understanding that exceptions were met with challenge. Such as: "If an ostrich doesn't fly, then it can't be a bird." This is what school children encounter when they learn that a whale isn't a fish. The language we use rather gracelessly allows these exceptions in the form of conjunctive appendices, but this is clearly a hack. I will suggest below that a formal language is required to overcome that difficulty (for example, expressions of formal logic, which defines a consistent way of using "or" and "and," and allows unlimited nesting of exceptions, so that any true/false relationship can be expressed unambiguously).
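The point about formal logic can be made concrete with a toy sketch. This example is my own illustration, not something from the text: the predicate names are invented, and the idea is only that "or," "and," and "not" let exceptions be nested explicitly rather than tacked on as conjunctive appendices in prose.

```python
# Toy sketch (illustrative names, not a real taxonomy library):
# "Birds fly" with its exceptions nested in ordinary boolean logic.

def can_fly(is_bird: bool, is_ostrich: bool, is_penguin: bool) -> bool:
    # The exceptions live inside the expression itself, and more can be
    # nested indefinitely without changing the form of the rule.
    return is_bird and not (is_ostrich or is_penguin)

# A sparrow: a bird, not an ostrich, not a penguin -> flies.
assert can_fly(True, False, False)
# An ostrich is still a bird; it just doesn't fly.
assert not can_fly(True, True, False)
```

The ostrich case no longer challenges the category "bird"; it is simply one more clause inside an unambiguous true/false expression.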
Quickly assembling a set of closed rules for a new environment seems like a good idea. It's a fast best-guess approach to finding useful cause and effect relationships.
Of course, the closed method is not suitable for doing science. Kuhn's The Structure of Scientific Revolutions suggests that closed outlooks solidify at any level of complexity, and require some bashing to break up. An example would be the certainty (due to Aristotle) that celestial bodies move in perfect circles. This is like Gould's idea of "punctuated equilibrium" in biological evolution. I graphed the associated relationship between predictability and complexity recently in "Randomness and Prediction."
The question is when to use the open versus closed approach. Historically, I think the closed approach may have had a blanket "explanation" in the form of mystical associations of cause and effect, which provides a putative low-complexity relationship. "Joe got struck by lightning because he displeased the weather god" has the appearance of an explanation, except that it's not actually predictive. It takes a dedicated effort to discover that fact, however. For that we need an open method.
The disadvantages of the open method make a long list. First, it's energy intensive--you have to continually be making observations, comparing what you see to what you think you should see (e.g., a three-legged cat), and updating the ever-growing explanation. It also takes more energy to use or communicate the current explanation, and as soon as you do, it's out of date again.
These are not fatal flaws, but ones to be considered. For some phenomena, this is probably how we naturally reason, if in a limited way. For example, our minds do something like Bayesian reasoning (updating the probability of an event based on how frequently we encounter it), although our on-board system has been shown to be deeply flawed (see Daniel Kahneman's recent book for this and a lot more).
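The frequency-based updating mentioned above can be sketched in a few lines. I've chosen Laplace's rule of succession as the illustration--one simple Bayesian estimator among many, and my choice rather than anything specified in the text:

```python
# A minimal sketch of updating a probability from encounter frequency,
# using Laplace's rule of succession (an assumption of this example).

def posterior(successes: int, trials: int) -> float:
    # Estimated probability of the event after observing it
    # `successes` times in `trials` encounters.
    return (successes + 1) / (trials + 2)

# Four four-legged cats in a row make "cats have four legs" look likely...
p_before = posterior(4, 4)   # 5/6
# ...then one three-legged cat nudges the estimate downward.
p_after = posterior(4, 5)    # 5/7
assert p_after < p_before
```

Each new observation shifts the estimate a little--exactly the "continually updating" character of the open method, and exactly the energy cost the paragraph above complains about.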
Perhaps the open process needs a kind of empirical 'clean-up' to be really useful. Elegant explanations generally only work with clean data. That is, if you want to discover Newtonian mechanics, it's unlikely that you can do this with just your eyes and ears. When Galileo began measuring the "drop" times on an inclined plane, he was onto something.
In addition to a solid empirical methodology, an open method also needs a way to reduce the size of an explanation while retaining its predictive power. In my graphs in "Randomness and Prediction," I plotted predictability versus complexity, not size. It works like this.
Suppose I have an observed relationship that I have cataloged like this: (1,2), (2,4), (3,8), (4,16), where this might be thought of as a cause and effect. A one 'causes' a two, and so on. Because my empirical methods are sound, I trust that there's not too much error in the observed values. As the list grows by using the open method, I have a better and better 'explanation' of past events and a better and better predictor of future ones (fine print about the inductive hypothesis goes here...). But the list will become too unwieldy to remember, communicate, or use effectively, as the observations accumulate. What I need is a kind of data compression to reduce the list to a manageable size. If I do this correctly, the explanation doesn't change, nor does the complexity, but the size does. I can reduce it to effect = 2^cause if I have the idea of an exponential function. We might call this data reduction the creation of a formal theory.
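The compression step above can be checked mechanically. This is just a sketch of the idea in the paragraph--the catalog and the rule are the ones given in the text; the code around them is mine:

```python
# The 'data compression' step: replace a growing catalog of observations
# with a short rule that has the same predictive power.

observations = [(1, 2), (2, 4), (3, 8), (4, 16)]  # (cause, effect) pairs

def theory(cause: int) -> int:
    # The compressed explanation: effect = 2^cause.
    return 2 ** cause

# Same explanation, smaller size: the rule reproduces every observation...
assert all(theory(c) == e for c, e in observations)
# ...and, unlike the raw list, it predicts unobserved cases.
assert theory(5) == 32
```

The list grows without bound under the open method, but the rule stays four characters long no matter how many confirming observations accumulate--that is the sense in which the size changes while the explanation does not.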
I started by wondering whether the phenomenon of people who don't know things, and further don't know that they don't know them, could be attributed to one of the two epistemologies mentioned at the beginning. I think the argument above shows that it's possible that the two barriers of empiricism and abstract thinking needed to effectively use an open method are too formidable for a lot of people. For one thing, it's not hard to get by using closed systems, and it may require formal education in scientific method and meta-cognition to effectively use open systems.
One final note appropriate to the calendar in the US: it's a lot easier to communicate closed explanations than open ones. Even with data compression, "things fall" is less complex than Newton's laws. So in a debate conducted in sound bites from political candidates, the closed epistemology wins. It's easier, it's comfortable for the listener--the whole structure of English is built to 'hack' a closed way of thinking by adding a few contingencies ("Cats have four legs, but I once saw one with three")--and the explanations take less time to say. You have to expand "Drill!" into "Drill, baby, drill!" to make it bigger, because the basic message can be summed up in one word, and that may seem too short to some audiences to count as a serious thought.
This is just another reason why we should be deliberate about teaching science and meta-cognition in school, not as alien ways of thinking that only people in white coats use at work, but as the mode of thinking that differentiates us from the other mammals, and might allow us someday to collectively make good decisions.