A mathematician and computer scientist, Gregory Chaitin (born 1947) has done as much as anyone working in the foundations of mathematics, with the possible exceptions of Turing and Godel, to advance our understanding of undecidability, uncomputability and randomness.

Founder and primary architect of algorithmic information theory, Chaitin has significantly extended the work of Godel and Turing. His information-theoretic approach to incompleteness uncovers pure randomness lurking in the guts of any viable mathematical system, while at the same time providing a powerful unifying formalism which offers hope of integrating such apparently devastating results with the rest of the mathematical universe.

In between shaking the foundations of mathematics, he has found time to pursue a 30-year career as a programmer and researcher in theoretical computer science and physics at IBM, write seven books, travel for many years giving essentially the same lecture on incompleteness, and enjoy honorary and visiting doctorates and professorships from universities around the world.

Complexity and randomness

As a precocious 10-year-old with the run of the adult sections of the New York Public Library and of Columbia University, Chaitin became interested in maths through astronomy and physics and started a program of self-study. Blissfully ignorant that a generation of mathematicians was trying its best to ignore the incompleteness results of Turing and Godel, he became fascinated with the foundational questions they raised about mathematics.

Disliking Godel's incompleteness proof ("too tricky") and finding Turing's approach more natural, he decided that a deeper understanding must be possible, something that would give us more insight into the effects and extent of uncomputability.

While taking the entrance exam for the Columbia University Science Honors Program, at age 15, Chaitin hit on the idea that incompressibility could be the key to defining randomness. He was beginning to see randomness as fundamental to undecidability, reasoning that computation requires the presence of pattern, order and structure, and that randomness requires their absence. Answering a question on what could be deduced from the presence of a pin on the moon, Chaitin decided that since the structure of a pin could be encoded in a short Turing machine program - compressed, in other words - it could not be a random occurrence and must therefore be a constructed object. (Needless to say, Chaitin was admitted to the Honors Program!)

Building on this insight, Chaitin worked his ideas into a long paper submitted in 1965 to the only peer-reviewed journal of theoretical computer science at the time, the Journal of the ACM. In it, he developed the notion of program-size complexity as a measure of information content: the information content of a sequence of binary digits is the length (in bits) of the shortest program which will produce exactly that sequence as output (Chaitin later coined the phrase elegant program for these shortest programs). He used this idea to provide a definition of randomness - roughly, that random sequences of bits are just those which are incompressible: there is no pattern or algorithm shorter than the sequence itself that will produce it, so its program-size complexity is equal to its own length (in bits).
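
One rough way to get a feel for the idea (an illustration only, not Chaitin's formalism): a general-purpose compressor gives an upper bound on a string's information content, so a heavily patterned string shrinks to almost nothing while random bytes barely shrink at all. The Python sketch below uses zlib for this purpose; true program-size complexity is uncomputable, so compression can only ever bound it from above.

    # Rough illustration of incompressibility (an upper bound only; the true
    # program-size complexity of a string is uncomputable).
    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        """Length in bytes after compression - a crude stand-in for the length
        of the shortest program that prints `data` exactly."""
        return len(zlib.compress(data, 9))

    patterned = b"01" * 5000          # obvious structure: "print '01' 5000 times"
    random_ish = os.urandom(10000)    # 10,000 bytes of operating-system randomness

    print(len(patterned), compressed_size(patterned))    # compresses to a few dozen bytes
    print(len(random_ish), compressed_size(random_ish))  # stays close to 10,000 bytes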

Unknown to the young Chaitin, R.J. Solomonoff had already published a paper including the claim that program length could be used as a quantitative measure of the "simplicity" (in Occam's sense) of a scientific theory. In Moscow, the eminent Russian probability theorist A.N. Kolmogorov independently produced a definition of randomness very similar to Chaitin's, along with his own theory of program-size complexity. The publication of Chaitin's own paper was much delayed - it eventually appeared in two parts in 1966 and 1969 - and according to Chaitin's book The Unknowable, the referee, Donald Loveland, sent the entire unpublished paper to Chaitin's unknown rival Kolmogorov immediately on receiving it. Chaitin describes this as "unfortunate".

There were technical differences between the two competing definitions, and subsequent refinements by Chaitin show that both were flawed. Nonetheless program-size complexity is widely known as Kolmogorov complexity, a fact which seems to irritate Chaitin a little (though "Kolmogorov-Chaitin complexity" is now sometimes used). The branch of information theory using this measure has come to be known as algorithmic information theory.

Berry's paradox and incompleteness

In 1966, Chaitin moved with his parents to Buenos Aires, beginning a long career with IBM there the following year, when he was taken on as a programmer. In 1970 he visited Rio de Janeiro and found there a copy of the manual for MIT LISP 1.5. Shortly after his return he learned the language by writing his own interpreter, the first in a long series he was to write as an aid to his mathematical theorizing.

While in Rio de Janeiro, around the time of Bertrand Russell's death, he used Berry's Paradox to begin to formalise his insights into the relations between program-size complexity and Godelian incompleteness.

Berry's paradox, popularised by Russell, involves items like "the smallest positive integer that can't be specified in less than fourteen words" - a specification which is itself only thirteen words long.

Chaitin was aware that Godel's famous incompleteness result was obtained by refining a different paradox,

"this sentence is false"
to read
"this statement is unprovable in formal axiomatic system FAS."
The more precise statement resolves the paradox and gives us the incompleteness theorem.

First, Chaitin took Berry's paradox and turned it into

"the smallest positive integer that cannot be specified in a program of less than N bits, for some universal Turing machine U"
(to "specify" a number, the program must simply print it out and halt).

If we consider formal axiomatic systems, FAS, used for proving things about specifying numbers, we can take a similar step to Godel's and change this to

"the smallest positive integer which can be proved (in FAS) to have no specification (for U) of length less than N bits".

However, we can also imagine a relatively simple program for U, call it P, which checks through all the proofs in FAS and, when it finds a proof that some positive integer, I, requires a program of at least N bits to specify it, prints out I and halts.

The length of P is given by a constant cFAS - the size of a short extension of a proof-checking program for FAS - plus the log to base 2 of N, roughly the number of bits we'd need to represent N in the program.

Now we're in trouble, because P is effectively a specification of I (it just prints out I and halts, by definition) and if N is large enough, P has length significantly less than N - it's just

log2(N) + cFAS.

But we supposedly have a proof in FAS that specifications of I are of length not less than N. Contradiction.

Chaitin's result, then, is that where N is much larger than cFAS, there can be no proof in FAS that some integer I requires a specification of at least length N.

Which is to say that a lower bound on program-size complexity cannot be derived in any FAS which itself has a complexity (cFAS) much lower than that bound. To prove that a particular object has program-size complexity of N, you need (more or less) "N bits of axioms", as Chaitin is fond of saying, meaning that the size of the shortest proof-checker for the formal system used must be not less than N.

Of course, since there must really be integers that require specifiers of much greater length than cFAS for any FAS, Godel's result, that there will be true but unprovable statements for any formal axiomatic system, follows immediately!
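
For concreteness, here is a hedged sketch of the shape of the program P in the argument above. The helpers enumerate_proofs and proof_concludes are hypothetical stand-ins for a proof enumerator and checker for FAS, not real library functions; the point is only to show why P's length is about log2(N) + cFAS.

    # Schematic sketch of the program P from the Berry-paradox argument.
    # `enumerate_proofs` and `proof_concludes` are hypothetical stand-ins for a
    # real proof checker for FAS; this shows the shape of the construction only.

    N = 1_000_000  # hard-coded bound; writing it down costs only about log2(N) bits

    def berry_program(enumerate_proofs, proof_concludes):
        """Search FAS proofs for a theorem of the form
        'the integer I has no specification shorter than N bits',
        then print that I and halt.  P thereby specifies I in roughly
        log2(N) + cFAS bits - far fewer than N - which is the contradiction."""
        for proof in enumerate_proofs():        # hypothetical: every FAS proof, in order
            i = proof_concludes(proof, N)       # hypothetical: returns I if the proof shows
            if i is not None:                   # complexity(I) >= N, otherwise None
                print(i)                        # P prints I and halts: a specification of I
                return i
        # If FAS never proves such a theorem, P searches forever - which is exactly
        # Chaitin's conclusion once N is much larger than cFAS.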

This information-theoretic approach, with its mysterious promise of somehow quantifying uncomputability, is characteristic of Chaitin's work on incompleteness, and a relation to formalisations of Berry's paradox is a constant theme in his later results. Where Turing and Godel say "undecidable!" or "cannot be proven!", Chaitin will say "N bits of axioms!". In 1974, after reading a paper of Chaitin's along the lines of the above, Godel granted him an appointment to discuss their different approaches to incompleteness, but due to bad weather and Godel's failing health the appointment was never kept.

Algorithmic information theory

Kolmogorov's rival formulation of program-size complexity turned out to be unable to deal correctly with random infinite binary sequences, a flaw spotted by the young Per Martin-Lof. Prompted by this, in 1974 Chaitin refined his own definition of complexity to require that the programs whose size is measured be self-delimiting.

Kolmogorov had defined an infinite binary sequence to be random when all of its "prefixes" are incompressible (the prefixes are just the finite sequences found at the beginning of the infinite sequence: its initial n elements). However, any sufficiently long prefix of an infinite random sequence will contain long runs of repeated digits - long stretches of 1's, say, all together. Since such runs are obviously compressible (for example by simple run-length encoding), Kolmogorov's definition of infinite random sequences fails to include a single one.
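
As a trivial illustration of that compressibility (mine, not part of the original argument), a run-length encoder collapses a long block of repeated bits to a single (bit, count) pair:

    # Toy run-length encoder: a run of 1000 identical bits becomes one short pair,
    # so any prefix containing such a run is compressible.
    def run_length_encode(bits: str) -> list:
        encoded, i = [], 0
        while i < len(bits):
            j = i
            while j < len(bits) and bits[j] == bits[i]:
                j += 1
            encoded.append((bits[i], j - i))
            i = j
        return encoded

    print(run_length_encode("0" * 3 + "1" * 1000 + "0" * 2))
    # -> [('0', 3), ('1', 1000), ('0', 2)]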

Chaitin's solution to this, developed during a 1974 visit to IBM's Watson Research Centre in New York, was to require that programs include within themselves the information of how long they are, making them "self-delimiting". Since in general it takes about log2(N) bits to represent a length N, programs' lengths mostly grow by an additive log2(N) term. It turns out that this makes Kolmogorov's condition - that the prefixes of a random infinite binary sequence should have program-size complexity equal to their own length - actually work. The complexity now ranges between N and N + log2(N), instead of between N - log2(N) and N, and the intensional definition of the set of random infinite sequences which results coincides exactly with the quite differently formulated one proposed by Martin-Lof to replace Kolmogorov's.
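
To see what self-delimitation buys, here is a toy prefix-free encoding (an illustration of the general idea only, not the construction Chaitin used): each program is preceded by its own length, written so that a decoder always knows where the length field, and hence the program itself, ends.

    # Toy self-delimiting (prefix-free) encoding, illustrative only.  Each bit of
    # the length is doubled, '01' terminates the length field, then the program
    # bits follow.  A decoder reading a bit stream therefore knows exactly where
    # each program stops, at a cost of roughly 2*log2(N) extra bits here.
    def encode(program: str) -> str:
        length_bits = bin(len(program))[2:]               # e.g. 5 -> '101'
        return "".join(b + b for b in length_bits) + "01" + program

    def decode(stream: str):
        """Return (program, rest_of_stream)."""
        length_bits, i = "", 0
        while stream[i:i + 2] != "01":                    # read doubled bits until the terminator
            length_bits += stream[i]
            i += 2
        i += 2                                            # skip the '01' terminator
        n = int(length_bits, 2)
        return stream[i:i + n], stream[i + n:]

    packed = encode("11010") + encode("001")              # two programs, back to back
    first, rest = decode(packed)
    print(first, decode(rest)[0])                         # -> 11010 001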

Together with his incompleteness result above, this newly rigorous definition of randomness has the interesting consequence that, though it's comparatively easy to prove a nonrandom sequence to be nonrandom (just find some way of compressing it), it can never be proven that a random sequence is random once the sequence's length is much greater than the number of "bits of axioms" (cFAS, the size of the proof-checker) of the axiomatic system used for the proof.

Chaitin introduced these results in the opening plenary session of the 1974 IEEE International Symposium on Information Theory, and published his new self-delimiting version of program-size complexity in his 1975 paper A theory of program size formally identical to information theory in the Journal of the ACM, which he regards as finally establishing algorithmic information theory on a firm footing as a field, the previous efforts of himself and Kolmogorov belonging to its "pre-history". Apart from the application to random sequences, the requirement for self-delimiting programs ensures that the complexity of two sequences taken together is never greater than the sum of their separate complexities (plus a small constant), making program-size complexity behave additively in the way an information measure should.

Around the same time, Chaitin devised his infamous constant, Omega, another development made possible by his idea of self-delimiting programs. Chaitin showed that for any universal Turing machine U whose valid programs are self-delimiting, the probability that it will halt when fed a program generated by tossing a coin for each successive bit is a well-defined real number less than one. This "halting probability" is the Omega for U.

This had been tried before, by Solomonoff, but without the requirement of self-delimitation attempts to calculate Omega always produced divergent series, giving a halting "probability" of infinity! With Chaitin's constraint, a series converging to a value between 0 and 1 can be produced by considering Omega_n: the probability that a random valid program of not more than n bits will halt within n steps. As n increases, Chaitin showed, the successive Omega_n converge on a real number between 0 and 1.
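
In outline (a sketch under stated assumptions: valid_programs and halts_within stand in for a real self-delimiting universal machine and are not actual library functions), the successive approximations Omega_n could be accumulated like this:

    # Schematic sketch of the lower approximations Omega_n.  Each halting program
    # p contributes 2^-len(p); because the valid programs form a prefix-free set,
    # Kraft's inequality keeps the total below 1.  The sequence Omega_1, Omega_2,
    # ... creeps up towards Omega, but at an uncomputable rate.
    from fractions import Fraction

    def omega_n(n: int, valid_programs, halts_within) -> Fraction:
        """Sum 2^-|p| over self-delimiting programs p of at most n bits
        that halt within n steps (both helpers are hypothetical)."""
        total = Fraction(0)
        for p in valid_programs(n):       # hypothetical: all valid programs of <= n bits
            if halts_within(p, n):        # hypothetical: run p for at most n steps
                total += Fraction(1, 2 ** len(p))
        return total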

The sum of an infinite series of rational numbers (each halting program of length N contributes 2^-N), Omega is itself irrational, and the sequence of its digits is incompressible, making it random under Chaitin's stringent definition of randomness. This means that it is also normal: any finite string of length k appears in it with the same limiting frequency as every other k-bit string. Moreover, the rate at which the series of Omega_n converges is itself uncomputable (though it's obviously very slow). The bits of Omega exhibit no structure, order or pattern - knowing the first 25 bits helps you not one whit in calculating the 26th, and knowing all the odd bits makes it just as hard to calculate the even ones. The time needed to calculate the first n bits of Omega grows at the rate of the busy beaver function of n. In order to prove you have the first n bits of Omega, you need n bits of axioms. This is about as unknowable as a well-defined sequence of bits can be.

Random arithmetic and experimental mathematics

Chaitin spent the next decade doing research at IBM (he was given a post at the Watson Research Centre, and moved to New York), mostly into RISC chip technology. But inspiration struck again when, in the mid-eighties, he was invited by Cambridge University Press to contribute the first volume of their series Cambridge Tracts in Theoretical Computer Science. Taking a LISP interpreter and "transforming it into an equation using ideas of Jones and Matijasevic", then seasoning with a LISP program which computes approximations to Omega, Chaitin produced a 200-page exponential diophantine equation with the property that if its single parameter is set to N, then the equation has a finite number of solutions (counting no solutions as finite) exactly when the Nth bit of Omega (for the interpreter) is 0, and infinitely many when that bit is 1. It follows that, since the bits of Omega are random, the answer to the question "are there finitely many solutions for parameter N?" is also random, something which is true "for no reason".

While irreducible randomness in bizarre constructions like the halting probability was an intriguing oddity, the same thing in a "simple" equation, involving only integer terms and ordinary arithmetic, was more provocative. When Chaitin's book Algorithmic Information Theory was published in 1987, it caused quite a stir. Ian Stewart's piece on it in Nature, The Ultimate in Undecidability, set the tone for a plethora of complimentary and enthusiastic popularisations, and Chaitin was asked to write pieces for Scientific American, La Recherche and New Scientist. In 1991 he was feted in Vienna, in Godel's old classroom, invited to give a lecture that was publicised in Vienna's Der Standard under the banner "Out-Godeling Godel".

Chaitin has continued to refine and develop his ideas, combining them with ever more elegant LISP interpreters into practical courses which let you see the main points of his theories with the help of programs written in his own specialised variant of LISP. He has delivered these courses at universities around the globe, and continues to publish new material and to collaborate with friends. Much of his work is freely available via his website, where you can also find multiple lectures on foundational questions in mathematics.

On the significance of his own brand of information-theoretic incompleteness results, Chaitin suggests that they support a view of mathematics as a quasi-empirical discipline, and that rather than searching for the One True Axiom Set, mathematicians should feel free to experiment with "more bits of axioms".

In a recent article for the EATCS Bulletin (June 2002), he says:

I think that incompleteness cannot be dismissed and that mathematicians should occasionally be willing to add new axioms that are justified by experience, experimentally, pragmatically, but are not at all self-evident. (In my opinion that P is not equal to NP is a good example of such a new axiom.) Sometimes to prove more, you need to assume more, to add new axioms! That's what my information-theoretic approach to incompleteness suggests to me.

Of course, at this point, at the juncture of the 20th and the 21st centuries, this is highly controversial. It goes against the current paradigm of what mathematics is and how mathematics should be done, it goes against the current paradigm of the nature of the mathematical enterprise. But my hope is that the 21st century will eventually decide that adding new axioms is not at all controversial, that it's obviously the right thing to do! However this radical paradigm shift may take many years of discussion and thought to be accepted, if indeed this ever occurs.


Books by Gregory Chaitin


All information herein extracted from Chaitin's own website at

http://www.cs.auckland.ac.nz/CDMTCS/chaitin/

where you can find most of his published output (including the books!) available freely as either HTML, postscript or PDF. Also included are many reviews, interviews, and variations of his favourite lecture on incompleteness and mathematical foundations, A century of controversy over the foundations of mathematics, which he's also published in most of his books.
