Blogger informs me that this is the 300th post to this blog. By way of celebration, I gave it a face lift using the more modern tools available now, so that the recent posts are displayed on the left as well as the tag list. Ain't technology grand?
I came across an interesting paradox on Reddit, and have improved it (I think). Here it is:
If you were to choose one of the responses below using a uniformly random selection, what is the probability that you would choose a correct answer?
A) 0%
B) 25%
C) 25%
D) 50%
A will be guessed 25% of the time, so A can't be the correct answer: an answer of 0% that gets chosen 25% of the time contradicts itself.
Suppose B is correct. Then C must also be correct. But together they make 50% of the choices, which means neither of them can be correct.
Clearly D can't be correct: it only occurs once, so it would be guessed just 25% of the time, not 50%.
So none of A-D can be correct, and there is a zero chance of guessing the answer, right? But wait--that means that A was correct after all! So there's a 25% chance of guessing right, but wait--that means B and C are correct, but we'll guess that 50% of the time, but wait--....
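If you'd like to see the loop mechanically, here's a quick consistency check (my own sketch in Python, not part of the original puzzle): an answer is self-consistent only if the fraction of options sharing its claimed value equals that value.

options = {"A": 0.0, "B": 0.25, "C": 0.25, "D": 0.50}

for label, claimed in options.items():
    # how often a uniform guess lands on an option with this claimed value
    actual = sum(v == claimed for v in options.values()) / len(options)
    verdict = "consistent" if actual == claimed else "contradiction"
    print(f"{label}: claims {claimed:.0%}, hit {actual:.0%} of the time -> {verdict}")

Every row prints a contradiction, and the fallback answer of 0% just sends us back to A.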
How can you get out of this? I have a proposed solution below the break, but it's kind of a cop-out.
The problem seems to be with self-reference. The question "what is the probability that you would choose a correct answer?" means "what is the probability that you would choose a correct answer to this problem?" We could create similar problems using it as a model. For example:
The correct answer to this question is 'False'
___True
___False
That one's not very subtle. It's just a variation on the classic Liar Paradox. You can read about it and theories for resolution on Wikipedia's page, and there's another nice article on it here. Note that Stephen Yablo creates such a paradox without self-reference (link), using an infinite list of linked references.
Let me say up front that I'm not a logician. I will undoubtedly either 1. trip over some elementary mistake, or 2. repeat something discovered 35 years ago. I won't let that spoil the fun; paradoxes are too entertaining to leave to the logicians and philosophers.
First, forget about Truth. We already know, thanks to Gödel, that even modestly complex axiomatic systems like Peano Arithmetic cannot prove their own consistency. So if we're honest, a proposition in some axiomatic system could have these truth values:
- Proven true with a deductive argument based on axioms, contingent on not being contradicted.
- Proven false with a deductive argument based on axioms, contingent on not being contradicted.
- Proven contradictory: proofs for true and false both exist (this is usually a BAD THING)
- Proven indeterminate (a proof exists that we can assign it neither true nor false--more on this later)
- Unknown (we have no proof one way or the other)
That sound you heard was the Law of the Excluded Middle popping. We can imagine that we've numbered all the possible well-formed propositions like "1+1=3" (provably false) and so on in a loooong list. We make a table out of those, numbering the propositions 1,2,3... and list beside each one the status. There are infinitely many true ones and infinitely many false ones, and we hope no contradictory ones.
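If it helps to picture the bookkeeping, here's one way the table might be sketched in Python (my invention, with made-up index numbers; nothing canonical about it):

from enum import Enum

class Status(Enum):
    PROVEN_TRUE = 1      # contingent on no contradiction turning up
    PROVEN_FALSE = 2     # likewise contingent
    CONTRADICTORY = 3    # proofs both ways: a BAD THING
    INDETERMINATE = 4    # provably neither true nor false
    UNKNOWN = 5          # no proof one way or the other yet

# the loooong list, indexed by proposition number
table = {
    17: ("1+1=2", Status.PROVEN_TRUE),
    123: ("1+1=3", Status.PROVEN_FALSE),
}
print(table[123])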
So about this indeterminacy thing. To approach that, we need to construct a proof machine for finding if a proposition p is contingent-true. We'll call it T. It works like this: given a number n that corresponds to a proposition p, it starts assembling arguments from axioms in a systematic way so that all possible arguments are covered, building ever more complex ones, to see if p is true or false. That is, it tries to find arguments for p and for ~p systematically. We can imagine it alternating between the two. Of course, T is completely impractical--it's just a theoretical tool. If we give T a proposition p that's not provable, it will run forever and not give us an answer. Nevertheless, it's very useful to imagine. Let's say that our first two propositions are number zero, which is just False, and number one, which is True. So if T ever stops running, it will return a zero for false and a one for true, the numbers representing the propositions.
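As a very loose illustration (entirely my sketch, not part of the construction), here is a toy stand-in for T in Python, with truth-table checking swapped in for proof search: over bare propositional formulas, "provable" just means "tautology," so this miniature T always halts. The real T, searching proofs in arithmetic, has no such guarantee.

from itertools import product

# A toy stand-in for T over propositional formulas in variables a and b.
# "Provable" here is replaced by "tautology," which is decidable, so unlike
# the real T this version always terminates. Purely illustrative.
def T(formula, variables=("a", "b")):
    envs = [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))]
    if all(eval(formula, {}, env) for env in envs):
        return 1   # a "proof" of the formula exists
    if all(not eval(formula, {}, env) for env in envs):
        return 0   # a "proof" of its negation exists
    return None    # contingent: neither provable nor refutable here

print(T("a or not a"))    # 1
print(T("a and not a"))   # 0
print(T("1 + 1 == 3"))    # 0, the toy version of T(123) below
print(T("a or b"))        # None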
For example, if we are clever and figure out that a proposition has a proof, then we can be sure that T(n) = 1. If proposition 123 is "1+1=3", we can be sure that T(123) = 0 even if we don't actually run the program described; it would eventually find the same proof we did, or a shorter one.
Now that we know what we're doing, we will allow the T function to be included in propositions. In order to do this, we have to imagine that T can run a recursive copy of itself when needed. So, for example, it can resolve this situation, where the numbers on the left are the proposition index numbers.
12: T(13) = 1    (in other words, proposition 13 is true)
13: 1+1=2
In order to find T(12), we have to imagine the computation of T(T(13)). The 'outside' T has to pause and wait for the 'inside' T to finish. If it ever does finish, it will have a zero or one presented to it. In this case,
T(12) = T(T(13)) = T(T(1))
Since the truth of 0 or 1 is trivial to ascertain, T(T(1)) = 1, and T(T(0)) = 0. Now suppose for some proposition k we have:
k: T(k) = 1
If we want to say that proposition k is true (contingent, of course), we have to be sure that T would eventually find a proof for it, if left to run long enough. Would it? Let's see what would happen.
T(k) = T(T(k)) = T(T(T(k))) = ...
Recall that in order to call proposition k true, we have to be able to argue that T will terminate with a 1 result. Clearly it won't. It also won't terminate with a 0 result. So proposition k is provably indeterminate.
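In programming terms, proposition k is a function that must evaluate itself before it can return; a minimal sketch:

# Proposition k asserts "T(k) = 1", so evaluating k means evaluating k again.
# Python gives up with RecursionError where the idealized T would run forever.
def T_k():
    return T_k()   # T(k) = T(T(k)) = T(T(T(k))) = ...

try:
    T_k()
except RecursionError:
    print("the search never bottoms out: k is indeterminate")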
We could argue thus: since T(T(n)) = T(n), we can just telescope all those Ts and say that T(k) = 1. But if we do that, we've constructed new forms of argument--a meta-logic outside the system described above. Of course, k is only provably indeterminate in meta-logic too. There are choices we can make about what flavor of logic we like.
The Liar's Paradox becomes trivial--it's the same example with a logical twist.
k: T(k) = 0
The English version is "This statement is false." T still recurses infinitely, and we have to say that proposition k is provably indeterminate (from our point of view).
We can create variations that include loops that are not strictly self-referential:
m: T(n) = 0    (another way to write this is ~T(n))
n: T(m) = 0
This is like two squabbling teens, each pointing at the other saying "he's wrong!" and "no, she's wrong!". What's the value of T(m)? We can watch it self-destruct:
T(m) = T(~T(n)) = T(~T(~T(m))) = ...
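As code, the squabble is mutual recursion with no base case (again just a sketch):

# m asserts T(n) = 0, and n asserts T(m) = 0.
def T_m():
    return 1 - T_n()

def T_n():
    return 1 - T_m()

try:
    T_m()
except RecursionError:
    print("the m/n loop never terminates either")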
It should be clear that any self-referential or looped sequence of propositions will never allow T to terminate. The example at the top of the post would clearly qualify, although the rules for the propositions would have to be clearly articulated in a language. This may sound unsatisfactory as a punch line.
We could go further. We could imagine a proposition with broader aim:
g: for any number n, T(n) = 1 implies T(~n) = 0 and T(~n) = 0 implies T(n) = 1
This proposition claims that there are no contradictions.
[Edit: not really, since by definition T(n) and T(~n) return the same value. We'd need a new "contradiction operator," but it's not clear that this is definable in a useful way. This is probably a severe limitation.] That would be nice. But if n can equal g, then we have an infinite recursion, and g is indeterminate. But here's an idea. What if we try to exclude g from the proposition to avoid the recursion?
g: for any number n other than g, T(n) = 1 implies T(~n) = 0 and T(~n) = 0 implies T(n) = 1
But we still have to worry because some of those other propositions out there might contain a reference to g, and still suck us into a loop. In fact, because all propositions are eventually on the list somewhere, there's sure to be an infinite number of them. In order to get anywhere, we'd probably have to exclude all references to the T operator.
g: for any number n, where proposition n doesn't use T, T(n) = 1 implies T(~n) = 0 and T(~n) = 0 implies T(n) = 1
Even this won't work because we have an infinite number of cases to check, which is impossible for T in finite time. So this form of proposition can only be determined by T if the set is finite.
Let's look at Yablo's paradox. He constructs a list of propositions like this:
2: for all k > 2, T(k) = 0
3: for all k > 3, T(k) = 0
4: for all k > 4, T(k) = 0
etc.
This doesn't leave much room for other propositions, but we can work around that. If any one of these is true, all the following ones are false. But then the next one, being false, implies that some later one is true, violating the premise. So none of these can be true, on pain of contradiction. But if, say, number 2 is false, there must exist at least one that follows where T(k) = 1, another contradiction. What to do?
If we imagine our T machine trying to evaluate T(2), it's obvious that it will never finish because it has to check an infinite number of cases. So all of these are indeterminate.
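A sketch of that evaluation, under the same caveats as before:

from itertools import count

# Yablo's proposition i asserts "for all k > i, T(k) = 0". Checking it means
# checking infinitely many successors, each of which starts the same
# unbounded check one step further along.
def T_yablo(i):
    return 1 if all(T_yablo(k) == 0 for k in count(i + 1)) else 0

try:
    T_yablo(2)
except RecursionError:
    print("T(2) never finishes")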
Maybe this construction throws the proverbial baby out with the bathwater. But it does seem to take care of lots of paradoxes.
Concluding note: when I think about a paradox like the Liar's, my brain finds itself in a loop: "true means false, false means true, etc." This is similar to constructions in programming languages that are useful for various things:
x := ~x
This flips the variable x back and forth between true and false as long as the code is run. So what is quite normal in programming seems paradoxical in ordinary logic. The difference is time. If we allow variables (truth, even) to change over time, we can accommodate a paradox. Here's a fanciful notion: maybe time is just the universe's way of resolving paradox.
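For instance, here's the liar rendered as a process rather than a proposition (a sketch in Python):

# x is reassigned each step instead of being pinned to one truth value.
# No paradox, just oscillation in time.
x = True
for step in range(6):
    print(step, x)
    x = not x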
Update: There's a problem with the definition of T, but I think it's fixable. Having T simultaneously try to find a proof for p and a proof for ~p is cute, but it causes problems. The most severe is that if we ourselves find a proof for p, we cannot assume that T(p) = 1, because T just might find a shorter proof for ~p. This can't happen unless the axioms contradict themselves, but we haven't assumed that. One way out is to assume up front that the axioms are not contradictory--kind of an insurance rider: "in case of axiom failure, all results below are invalid." This is a practical approach, and the kind of thing we have to implicitly assume anyway. Another solution is to have T only check for provability of p and not for ~p. This is more powerful because it allows us to entertain propositions like g above. I assume that we adopt the convention that "~p" means "the number of a proposition that corresponds to the logical negation of the proposition numbered p."
Readers who are interested in this topic may also find Löb's Theorem interesting.