Complexity, Randomness and Impossible Tasks

Some things are simple, some are complicated. What makes them so? In fields ranging from biology to physics to computer science, we're often faced with the question of how to measure complexity.

There are many possible measures, but one of the most important is algorithmic complexity. First described by two mathematicians, the Russian Andrei Kolmogorov and the American Gregory Chaitin, it has been extensively developed by Chaitin in recent years.

The flavor of the subject can perhaps be sampled by considering this question: Why is it that the first sequence of 0's and 1's below is termed orderly or patterned and the second sequence random or patternless? (Note that since almost everything from DNA to symphonies to this very column can be encoded into 0's and 1's, this is not as specialized a question as it may at first appear.)

(A) 0010010010010010010010010010010010010010010 …

(B) 1000101101101100010101100101111010010111010 …

Answering this question leads not only to the definition of algorithmic complexity, but also to a better understanding of (a type of) randomness as well as a proof of the famous incompleteness theorem first proved by the Austrian mathematician Kurt Gödel.

Hang on. The ride's going to be bumpy, but the view will be bracing.

Complexity and the Length of Programs

With sequences like those above in mind, Chaitin defined the complexity of a sequence of 0's and 1's to be the length of the shortest computer program that will generate the sequence.

Let's assume that both sequences continue on and have lengths of 1 billion bits (0's and 1's). A program that generates sequence A will be essentially the following succinct recipe: print two 0's, then a 1, and repeat this block roughly 333 million times (a billion bits is just over 333 million of these three-bit blocks). If we write the program itself in the language of 0's and 1's, it will be quite short compared to the length of the sequence it generates. Thus sequence A, despite its billion-bit length, has a complexity of, let's say, only 10,000 bits.
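That recipe can be sketched as a tiny program whose source stays a few dozen characters long no matter how long an output it is asked to produce (a Python sketch; the function name `sequence_a` is mine):

```python
def sequence_a(n: int) -> str:
    """First n bits of sequence A: the three-bit block "001" repeated."""
    return ("001" * (n // 3 + 1))[:n]

# The whole recipe fits in a couple of lines, yet nothing stops n from
# being a billion: the program's length barely depends on the output's.
print(sequence_a(45))
```

The same short source could just as well emit the billion-bit version, which is exactly why sequence A's complexity is so much smaller than its length.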

A program that generates sequence B will be essentially the following copying recipe: first print 1, then 0, then 0, then 0, then 1, then 0, then 1, then … There is no way any program can compress the sequence. If we write the program itself in the language of 0's and 1's, it will be at least as long as the sequence it generates. Thus sequence B has a complexity of approximately 1 billion bits.

We define a sequence to be random if and only if its complexity is (roughly) equal to its length; that is, if the shortest program capable of generating it has (roughly) the same length as the sequence itself. Thus sequence A is not random, but sequence B is.
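True algorithmic complexity is not computable, but an ordinary compressor gives a crude, computable upper bound on it, and that is enough to make the contrast between the two kinds of sequence vivid (a Python sketch using zlib as the stand-in compressor; the helper name `compressed_bits` is mine):

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    """Bits in the zlib-compressed form of s: a rough, computable upper
    bound on its algorithmic complexity (which is itself uncomputable)."""
    return 8 * len(zlib.compress(s.encode()))

orderly = "001" * 10_000                       # sequence-A style pattern
random.seed(0)                                 # pseudo-random stand-in for B
patternless = "".join(random.choice("01") for _ in range(30_000))

# The orderly sequence collapses to a tiny fraction of its length;
# the patternless one leaves the compressor far less to exploit.
print(compressed_bits(orderly), compressed_bits(patternless))
```

The numbers are only upper bounds, but they point the right way: the patterned sequence compresses drastically, the patternless one hardly at all.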

The Berry Paradox and Gödel's Theorem

Chaitin has employed the notions of complexity and randomness to demonstrate a variety of deep mathematical results. Some involve his astonishing number Omega, which establishes that chance lies at the very heart of deductive mathematics. These results are explained in many of his books, including the forthcoming "Meta Math," available on his Web site. Let me here just mention his proof of Gödel's theorem.

He begins with a discussion of the Berry sentence, first published in 1908 by Bertrand Russell, who credited it to the Oxford librarian G. G. Berry. This paradoxical sentence asks us to consider the following task: "Find the smallest whole number that requires, in order to be specified, more words than there are in this sentence."

Each of the phrases "number of hairs on my head," "number of different states of a Rubik's Cube" and "speed of light in centimeters per decade" specifies, using fewer than the 20 words in the given sentence, some particular whole number. The paradoxical nature of the task becomes clear when we realize that the Berry sentence specifies a particular whole number that, by its very definition, the sentence contains too few words to specify. (Not quite identical, but suggestive, is this directive: You're a short person in an elevator in a very tall building, with a bank of buttons before you, and you must press the button for the first floor that you can't reach.)

What yields a paradox about numbers can be modified to yield mathematical statements about sequences that can be neither proved nor disproved. Consider a formal mathematical system of axioms, rules of inference and so on. Like almost everything else, these axioms and rules can be systematically translated into a sequence of 0's and 1's, and if we do so, we get a computer program P.

We can then conceive of a computer running this program and over time generating from it the theorems of the mathematical system (also encoded into 0's and 1's). Stated a little differently, the program P generates sequences of 0's and 1's that we interpret as the translations of mathematical statements, statements that the formal system has proved.

Now we ask whether the system is complete. Is it always the case that for a statement S, the system either proves S or it proves its negation, ~S?

To see that the answer to this question is "No," Chaitin adapts the Berry sentence to deal with sequences and their complexity. Specifically, he shows that the following task is also impossible (though not paradox-inducing) for our program P: "Generate a sequence of bits that can be proved to be of complexity greater than the number of bits in this program."

The program P cannot generate such a sequence: P is itself a description of any sequence it generates, so by the very definition of complexity any such sequence has complexity no greater than (roughly) the length of P. Stated alternatively, there is a limit to the complexity of the sequences of 0's and 1's (translations of mathematical statements) generated (proved) by P. That limit is (roughly) the length of P, considered as a sequence of 0's and 1's, and so statements asserting a complexity greater than that limit can be neither proved nor disproved by P.
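The inequality doing the work here is simply that a program is a description of whatever it outputs, so the output's complexity cannot exceed (roughly) the program's own length. A toy sketch of that bound (the helper name is mine, and the small additive constant in the real inequality is ignored):

```python
def complexity_upper_bound(program_source: str) -> int:
    """If a program prints a sequence, the sequence's algorithmic
    complexity is at most the program's own length in bits (give or
    take a constant), since the program itself describes the output."""
    return 8 * len(program_source)

# A one-line generator of a billion-bit sequence caps that sequence's
# complexity at a few hundred bits, not a billion.
generator = 'print(("001" * 333_333_334)[:10**9])'
print(complexity_upper_bound(generator))
```

P is in the same position as this one-liner: everything it proves is bounded by its own modest length, which is why the Berry-style task above defeats it.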

This is a sketch of a sketch of Gödel's incompleteness theorem. It can be interpreted to be a consequence of the limited complexity of any formal mathematical system, a limitation affecting human minds as well as computers.

The above leaves out almost all the details and is, I realize, almost impenetrably dense if you have never seen this kind of argument. It may, however, give you a taste of Chaitin's profound work.

John Allen Paulos, professor of mathematics at Temple University, is the author of best-selling books including "Innumeracy" and "A Mathematician Plays the Stock Market." His Who's Counting? column appears the first weekend of every month.