Let's start by painting in some of the background in
broad strokes, people. Take your seats. We're about to begin, as you well know. Ready?
Good.
About 2,600 years ago, the ancient Greeks started to play with mathematics. Other civilizations had dealt with numbers before — the Egyptians and Babylonians, most notably — but it was the Greeks who first studied mathematics as an end unto itself, rather than a means to something like building pyramids or finding the size of a plot of land. The Greeks took math and abstracted it from its corporeal surroundings. They were the first to devise logical proofs of mathematical facts, such as the irrationality of the square root of two and the Pythagorean theorem. Euclid's Elements was the culmination of over three centuries of mathematical work by the Greeks, culling theorems and results from varied sources and presenting them together in a single structured work, starting from foundations and working logically up to more complex theorems. And so it was that for the next two millennia, mathematics was held up as the standard of logical rigor and elegance, and Euclid's work was seen as the ultimate truth, the perfect foundation of the perfect science. And because of this special status that mathematics came to hold, an interesting thing started to happen: people got lazy. Methods of proof got sloppy. This went on until, over the course of the 19th century, it became increasingly clear to the mathematicians of the world that the rigor of mathematics had somehow been lost. Imaginary numbers and irrational numbers had run wild, somehow transforming from mere bookkeeping devices into essential mathematical tools, despite the fact that nobody really knew what they were. W. R. Hamilton had invented strange mathematical objects called quaternions that seemingly violated all known rules of algebra — which brought the mathematical community to the shocking realization that they didn't even know the rules of algebra. And worst of all, it appeared that Euclid's geometry was not a necessary truth. Wild and alien geometries had been stumbled upon that flew in the face of Euclid, but were internally consistent. Gauss even suggested that they were physically possible — that theoretically, they could actually describe the world we live in. Suddenly, everything — the Pythagorean theorem, calculus, even the rules governing things as basic as addition and multiplication — was called into question. Something had to be done. And so it was in the latter half of the 19th century that the mathematical community set out to give mathematics a sound logical foundation.
But then they ran into more problems. When they began discussing the foundations of mathematics, the mathematicians realized that they did not all agree about such things as acceptable forms of mathematical proof, laws of logic, and what sort of mathematical entities actually "exist," if any do at all. The math world split up into four main philosophical camps: the logistic school, the formalist school, the set-theoretic school, and the intuitionistic school. The details of their philosophical differences need not concern us here; all that matters for our purposes is the major difference between the first three of these schools and the fourth. The logistic, formalist, and set-theoretic schools had some differences, but what they all had in common was the goal of formalization: they each wished to come up with a formal system of axioms and rules of inference which could function as a rigorous logical foundation for all of mathematics, thereby securing math's position as the "inside track" to absolute truth. The intuitionists shared the latter desire — to secure math's position — but they did not think that logic was the way to do it. They believed that:
Mathematics is a human activity which originates and takes place in the mind. It has no existence outside of human minds; thus it is independent of the real world. The mind recognizes basic, clear intuitions. These are not sensuous or empirical but immediate certainties about some concepts of mathematics.1
Accordingly, they believed that only mathematical objects and rules of inference which were directly comprehensible to human intuition were acceptable in mathematics. Imaginary numbers, infinite sets, limits, calculus, irrational numbers, most of non-Euclidean geometry — these and much else were rejected by the intuitionists. Many even rejected the law of excluded middle and any mathematical proofs using it, insisting that any math proof claiming the existence of something must produce it, rather than proving its existence indirectly by showing the opposite to be impossible. The most extreme of this camp became known as constructivists, insisting that the only sort of proofs that hold any water are constructions. You get the idea. Of course, the rest of the math world thought that the intuitionists were absolutely nuts. After the initial break somewhere around 1900, the camps squabbled with each other for about 30 years. Promising progress in axiomatization was made in all of the camps minus the intuitionists, who struggled to reconstruct all of known mathematics using only the (very limited) tools they left themselves – and without much success.
Then Gödel came.
There's no need for us to go into the technical aspects of Gödel's results here. It's commonly thought that he totally dashed the hopes of the formalists, the logicists, and the set-theoreticians. This isn't quite true, but it's close. Gödel's incompleteness theorems don't say that mathematics can't be formalized — they simply say that any formalization complex enough to deal with all of mathematics will either be:
- logically inconsistent — i.e., the axioms lead to a contradiction of the form "P and not-P," and thus anything can be proven within the system (since anything follows from a contradiction); or
- logically consistent, but also both:
  - incomplete — that is, there will be mathematical truths that cannot be proven within the system; and
  - not provably consistent within the system — the system cannot prove its own consistency.
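For readers who want them spelled out, here is a rough sketch of the two theorems in standard modern notation; the symbols T, G_T, and Con(T) are the conventional placeholders, introduced here for convenience rather than drawn from anything above.

% LaTeX note: \nvdash requires amssymb; \text requires amsmath.
% First incompleteness theorem (sketch): if T is consistent, effectively
% axiomatized, and strong enough to do basic arithmetic, then there is a
% sentence G_T that T can neither prove nor refute:
\[
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
% Second incompleteness theorem (sketch): under the same hypotheses (plus the
% standard derivability conditions), T cannot prove the arithmetized sentence
% Con(T), which expresses "T is consistent":
\[
T \nvdash \mathrm{Con}(T).
\]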
This is not to say that no system can prove its own consistency — that's not true. A system of the sort we're discussing can prove its own consistency if and only if it is inconsistent. Its consistency won't actually be true, but it will nonetheless be provable; remember, an inconsistent system proves everything. So actually, the ability of a system to prove itself consistent is a sure sign that it's not (this is, in essence, the content of Gödel's second incompleteness theorem). Another common misunderstanding is to say that no consistent system can be proven consistent in any manner whatsoever. Once again, this is not true. Consistent systems simply cannot supply the proof themselves. It is a fairly simple matter to prove the consistency of one system using a different system. But this of course raises the question of that second system's consistency, so you'd really need a third to prove that one's consistency, but then you don't know if that one is consistent, and so on ad infinitum. (You may wonder if two systems could be set up in such a way as to prove each other's consistency. It may be possible, but this would not assure the consistency of both, for a third question would arise — the consistency of the two systems together. This is a little like the "third man" argument, actually, and with good reason. That combined consistency could never be proven by either of the two systems on their own, nor by the union of the two systems, unless of course one or both of the systems were inconsistent, or both were consistent but their union was not.) The point here is that even if you have a consistent system, not only will there be mathematical truths it cannot access, but one of those truths will be its own consistency.
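In the notation sketched above, the mutual-proof scenario and its limits might be written as follows; this is only a sketch, and it assumes both systems (and their union) are effectively axiomatized and strong enough for the second theorem to apply.

% Suppose two systems could each prove the other's consistency:
\[
T_1 \vdash \mathrm{Con}(T_2) \qquad \text{and} \qquad T_2 \vdash \mathrm{Con}(T_1).
\]
% Even so, if the combined system is in fact consistent, none of the three
% theories can certify that fact:
\[
T_1 \nvdash \mathrm{Con}(T_1 \cup T_2), \qquad
T_2 \nvdash \mathrm{Con}(T_1 \cup T_2), \qquad
T_1 \cup T_2 \nvdash \mathrm{Con}(T_1 \cup T_2).
\]
% (If any of the three proved it, the union would also prove its own
% consistency and so, by the second theorem, be inconsistent after all.)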
There are many shocking things about Gödel's proof, but the most shocking of all might be the fact that, although he himself was not an intuitionist, he accomplished this proof using methods that even the most hard-nosed constructivist would approve of. Gödel didn't just prove the existence of such "undecidable" statements — he gave a method by which, for any given axiomatic system sufficiently powerful to represent mathematics, such statements could actually be constructed. The proof was so simple, elegant, and brief that no mathematician, regardless of his philosophical persuasion, could doubt the truth of Gödel's result, and none did, though some were not happy about it. The formalists, logicists, and set-theoreticians were, for a time, devastated. The question had been settled: no axiomatization of any sort could capture all of mathematics and still be of any use. Of course, this was what the intuitionists had been saying all along, and they gloated over it — until they were reminded that their brand of math was utterly useless, unsuitable for use in much or all of calculus, physics, engineering, trigonometry, and many other fields. They couldn't even take square roots of all the integers. In fact, pretty much all they could do was addition, subtraction, multiplication, and division with the integers. The majority of the mathematical world agreed that this was no way to do mathematics. So what to do?
Well, pretty much nothing. Left with no solid ground on which to stand, much of the mathematical world simply chose to ignore the foundational problems, dismissing them as questions of faith, metaphysics, philosophy, or all of the above — and certainly not their problem. Others chose to look to the physical sciences for their foundation, pointing to the millennia of empirical evidence that the commonly accepted system of mathematics was the proper one for describing the physical world. Hermann Weyl went so far as to say that "'mathematizing' may well be a creative activity of man, like language or music, of primary originality." These responses are not as dumb as they may first seem — it would certainly be shocking on a very basic level if things like 2 + 2 = 4 turned out not to be true, and it would never be as simple as that single fact going out the window. Gödel's results don't mean that every individual mathematical fact is in doubt. They mean that the doubt applies to the whole system we use at once: either it is flawed somewhere, or it's fine. And they say nothing about how any of it applies to the real world. If we wake up one day and find that 2 + 2 = 3, that doesn't mean that our math has been wrong all this time. It just means that our math no longer describes the world accurately. But more on that later. Now it's time to discuss our present situation.
Ok, play the tape. I think I've got it cued up to the right spot. It's this button here, if I'm not mistaken...ah, there we go...
...see, there are a few places that I could start. I could start with my name. My name is Jacob Gardner...no, I'm not trying to be difficult or belligerent. Please get that out of my face. Just tell me where you'd like me to begin. The book? Ok, yeah. I can do it that way. Like this:
There is no need for me to recount the discovery of the Borges novel here; it was all over the papers and I certainly know no more than I gathered from the news. Anyhow, that's not what's important here. All that matters for now is that it was found shortly after the death of Borges's widow Maria Kodama, during what was then my third winter at Mentor University. I speak no Spanish, as you know, so I had to wait until a translation was published the next summer. I bought it immediately, as did two of my best friends, and we had all finished it by the time we returned to Mentor in the fall for what was undoubtedly our final year there. This is necessary information, you understand. I believe I'm making this as brief as possible. Please forgive me if I'm mistaken.
The three of us — Steve, Alex, and myself — met in my room our first night back. The conversation quickly turned to the Borges. I do not recall the precise nature of the early portion of the discussion, but I do know that we eventually got tangled up in a certain textual detail concerning the very end of the novel. In an attempt to resolve it, I pulled out my copy of the Borges and turned to the last few pages. That's when it first happened. Sorry about that. Yes. Clichés are hard to avoid when describing the extraordinary. Yes, I'm aware of that. Thank you very much, but no. I'm afraid that would be impossible. Please allow me to continue. I will try, yes. Yes. Thank you. It looked like maybe the page number was in a different font at first, or maybe it was some kind of misprint. I couldn't make it out properly until I looked right at it for a while. Eventually I could see the number, but it still looked strange somehow. It's hard to explain to...well, there's a sort of relationship with numbers that is developed after working with them for a time. And this number, a page number, the penultimate page number in fact, wasn't familiar to me. Undoubtedly it was a positive integer, and a somewhat large one at that (though still smaller than nearly all the other positive integers), but I couldn't tell much else about it immediately. I can't tell you how disturbing this is. I literally can't. I've worked closely with numbers for the better part of two decades, more closely than most. And yet it took me several minutes of staring at this number simply to determine that it was, in fact, not the number two. Can you let me — just, stop with the — no? Look, it would be easier — see —
"I can't make it out, whatever it is. It didn't even look like a number that time. I don't know how big it is now or whether it's even or odd. I'm not even sure now what an even number looks like! My God!"2
"Calm down. It's even and prime. That's the problem. Look at it again."
"Oh, it's — what have I been saying? Jesus. You're right. It's just even and prime. And not two. And...yeah. Right. That's…kind of odd, no pun intended, isn't it?"
— fair enough. We'll try that. As I was saying, we eventually came to the following conclusions about the page number: it was even; it was greater than two; and, despite our best efforts to prove the contrary, it was in fact prime. In other words, it was an even prime number greater than two. We found this to be highly strange, to say the least. After all, how can a number other than two be both even and prime? If it's prime, it's not evenly3 divisible by any number other than one and itself. If it's even, then it's evenly divisible by two. There was no way. We checked it over and over. There was nothing to be done, so far as we could tell. We were right, and it was impossible. Clearly, it was time for bed.
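To spell out the impossibility in question: with n standing for the page number (a label introduced here purely for convenience), the standard argument runs as follows.

% An even number greater than two always has a divisor besides one and itself:
\[
n \text{ even},\ n > 2 \;\Longrightarrow\; n = 2k \text{ for some integer } k \ge 2,
\]
\[
\text{so } 2 \mid n \text{ with } 2 \ne 1 \text{ and } 2 \ne n, \text{ and hence } n \text{ cannot be prime.}
\]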
The next morning Steve and I took a walk out in the Mentor University Tree Reservation and discussed the previous night's findings. If you would be so kind as to allow me to...why thank you.
"So think about it. This is precisely the kind of thing that might be found in one of Borges's stories. A sort of ‘crevice of unreality,'4 as he called it. So what are the odds that we would just happen to find this thing in the only novel he ever wrote? Doesn't it seem far more likely that somehow Borges had a hand in it? Granted, Borges died in 1986 and never saw this edition of his work, but he did write the book. So how could he have done this? I'm not sure, but one possible answer – and it's an elegant explanation, if I do say so myself – is that he wrote the book to this end. This was the purpose behind the entire book. Borges found a way to write a book that somehow hijacks the reader's brain, taking your previous associations to the terms ‘prime' and ‘even' and changing them subtly, without you even noticing, to make it possible for an ‘even prime' greater than two to exist, and then he made the book long enough so that the page number on the second-to-last page was that number. And his alteration of your ideas about ‘prime' and ‘even' was so far under your radar that you still freaked the fuck out when you saw what you thought to be an even prime number greater than two. In effect, he altered your manner of perceiving reality without you knowing. I hate to sound like one of those cyberpunk idiots, but the best term for it really might be ‘neurolinguistic hacking.'"
"Wow. So, uh...that's gotta be one hell of a translation from the original Spanish."
"No kidding."
As you can see, we were thinking too small. It did occur to us that Borges might have actually stumbled upon a mathematical anomaly or a full-blown proof of inconsistency, and we also granted that it was possible that this was simply some kind of empirical evidence against the Law of Excluded Middle or a similarly fundamental law. Nonetheless, we considered this to be highly unlikely, not only because of our (retrospectively quaint but not altogether foolish) faith in the logical foundations of reality, but also because of who Borges was. The idea that there was such a basic flaw in the structure of mathematics and/or logic that had been overlooked for millennia was hard to swallow; but the idea that it existed and Borges, the writer, the mathematical layman, was the one who found it was countless orders of magnitude more improbable.
We were satisfied with the "neurolinguistic" explanation for a time. The ground was still beneath our feet, objects were reasonably solid, and the laws of physics seemed largely unchanged. We saw no reason to look for anything deeper or more sinister. Of course, what we had failed to take into account was Borges's idealism...
That should be enough. As Mr. Gardner put it, he and his compatriots were indeed thinking "too small." The ground was reasonably solid, but not for long. We are all familiar with the "phantom ranges" that have intermittently traded places with the mountains, the Babel-like mass linguistic breakdowns, the inversion of the Caspian Sea, and the (presumably) tragic fate of Phoenix. Then there are the statistical anomalies, the "abrupt localized macroscopic decreases in entropy" – the sudden production of a chicken from a cubic meter of gas ten kilometers west of Sydney, the "miraculous" cooling stove of San Rafael, the cold fusion lemonade stand in the suburbs of Manchester. Lastly, there are the so-called psychological phenomena, with which we are all rather intimately familiar.
We cannot be absolutely sure of what it was. Perhaps it was an actual inconsistency in the commonly accepted laws of logic or mathematics manifesting itself. It's also possible that it was simply empirical evidence against those laws, proof that, consistent or no, those were not the logic and mathematics that actually describe our world. "In vain do you pretend to have learned the nature of bodies from your past experience. Their secret nature, and consequently all their effects and influence, may change, without a change in their sensible qualities."5 Hume may be on the right track here. Or perhaps Mr. Gardner was, in suggesting that Borges's idealism is at work. Perhaps we are all victims of the collapse of a self-inflicted reality dreamt up by none other than ourselves. But it no longer matters which of these theories is true. The net effect is the same, and perhaps there is no real difference between any of these alternatives. The crevice has opened wide, and our familiar world has collapsed in on itself. I suspect you know this as I do, for I am no longer sure that there is any difference between us.
1 Taken from Mathematics: The Loss of Certainty, by Morris Kline, page 234. Printed by Oxford University Press, 1980.
2 Note the striking similarity to the response of a certain subject of Bruner and Postman's.
3 Just to clear this up – here "even" and "evenly" are two quite separate and unrelated words. "Even" has the definition given in the text, whereas "evenly" is simply a part of the phrase "evenly divisible by." The latter phrase has the following meaning: if an integer X is evenly divisible by another integer Y, then X can be divided by Y with no remainder. We apologize for the inconvenience.
4 See Borges's essay, "Avatars of the Tortoise."
5 See Hume, An Enquiry Concerning Human Understanding, section IV, part II.