Complexity, Randomness and Impossible Tasks

Some things are simple, some are complicated. What makes them so? In fields ranging from biology to physics to computer science, we're often faced with the question of how to measure complexity.

There are many possible measures, but one of the most important is algorithmic complexity. First described by two mathematicians, the Russian Andrei Kolmogorov and the American Gregory Chaitin, it has been extensively developed by Chaitin in recent years.

The flavor of the subject can perhaps be sampled by considering this question: Why is it that the first sequence of 0's and 1's below is termed orderly or patterned and the second sequence random or patternless? (Note that since almost everything from DNA to symphonies to this very column can be encoded into 0's and 1's, this is not as specialized a question as it may at first appear.)

(A) 0010010010010010010010010010010010010010010 …

(B) 1000101101101100010101100101111010010111010 …

Answering this question leads not only to the definition of algorithmic complexity, but also to a better understanding of (a type of) randomness as well as a proof of the famous incompleteness theorem first proved by the Austrian mathematician Kurt Gödel.

Hang on. The ride's going to be bumpy, but the view will be bracing.

Complexity and the Length of Programs

With sequences like those above in mind, Chaitin defined the complexity of a sequence of 0's and 1's to be the length of the shortest computer program that will generate the sequence.
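Stated compactly, in standard notation that does not itself appear in the column, the definition reads:

```latex
% Kolmogorov-Chaitin complexity of a bit string s, relative to some fixed
% universal computer U: the length |p| of the shortest program p that prints s.
K(s) = \min \{\, |p| : U(p) = s \,\}
```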

Let's assume that both sequences continue on and have lengths of 1 billion bits (0's and 1's). A program that generates sequence A will be essentially the following succinct recipe: print two 0's, then a 1, and repeat this 333,333,333 times. If we write the program itself in the language of 0's and 1's, it will be quite short compared to the length of the sequence it generates. Thus sequence A, despite its billion-bit length, has a complexity of, let's say, only 10,000 bits.
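As an illustration, here is a minimal sketch in Python (the column itself presents no code; the repetition count assumes the billion-bit length):

```python
def sequence_a(n_repeats: int) -> str:
    """Sequence A is just the 3-bit block '001' repeated over and over."""
    return "001" * n_repeats

# Roughly a billion bits: 333,333,333 repetitions of the 3-bit block.
# The program stays the same few lines no matter how long the output grows,
# which is why sequence A has low algorithmic complexity.
print(sequence_a(10))            # small demo: '001001001001001001001001001001'
# print(sequence_a(333_333_333)) # the full billion-bit version of sequence A
```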

A program that generates sequence B will be essentially the following copying recipe: first print 1, then 0, then 0, then 0, then 1, then 0, then 1, and so on, listing each bit in turn; there is no way for any program to compress the sequence. If we write the program itself in the language of 0's and 1's, it will be at least as long as the sequence it generates. Thus sequence B has a complexity of approximately 1 billion bits.
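In the same sketch style, a program for sequence B has to carry the whole sequence inside itself as a literal. Only the opening bits from the column are shown here; the full program would have to spell out all billion of them:

```python
# No shortcut is available: the program must embed sequence B verbatim,
# so the program text is at least as long as the output it produces.
SEQUENCE_B = "1000101101101100010101100101111010010111010"  # ...continuing for a billion bits
print(SEQUENCE_B)
```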

We define a sequence to be random if and only if its complexity is (roughly) equal to its length; that is, if the shortest program capable of generating it has (roughly) the same length as the sequence itself. Thus sequence A is not random, but sequence B is.
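True algorithmic complexity is uncomputable, but a general-purpose compressor gives a rough stand-in for it: a patterned string squeezes down to almost nothing, while a random-looking one barely shrinks at all. The sketch below uses Python's standard zlib module on stand-in strings (not Chaitin's own examples):

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    """Length of the zlib-compressed form of `data`: a crude upper bound
    on the length of the shortest program that could reproduce it."""
    return len(zlib.compress(data, 9))

patterned = b"001" * 100_000        # sequence-A-like: an obvious repeating pattern
random_like = os.urandom(300_000)   # sequence-B-like: no exploitable pattern

print(len(patterned), approx_complexity(patterned))      # 300000 -> a few hundred bytes
print(len(random_like), approx_complexity(random_like))  # 300000 -> about 300,000 bytes
```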

The Berry Paradox and Gödel's Theorem

Chaitin has employed the notions of complexity and randomness to demonstrate a variety of deep mathematical results. Some involve his astonishing number Omega, which establishes that chance lies at the very heart of deductive mathematics. These results are explained in many of his books, including the forthcoming "Meta Math" (available on his Web site at http://www.cs.auckland.ac.nz/CDMTCS/chaitin/). Let me here just mention his proof of Gödel's theorem.

He begins with a discussion of the Berry sentence, first published in 1908 by Bertrand Russell. This paradoxical sentence asks us to consider the following task: "Find the smallest whole number that requires, in order to be specified, more words than there are in this sentence."
