May 5, 2004 -- Is the sky falling? And if so, when? Even when they're baseless, constant reports about nuclear weapons proliferation, pandemic diseases and environmental catastrophes revive these perennial human questions and contribute to a feeling of unease.
So too did the recent passing of an asteroid almost 100 feet in diameter within 30,000 miles of the Earth. Such news stories make a recent abstract philosophical argument a bit more real.
Developed by a number of people including Oxford philosopher Nick Bostrom and Princeton physicist J. Richard Gott, the Doomsday Argument (at least one version of it) goes roughly like this.
There is a large lottery machine in front of you, and you're told that in it are consecutively numbered balls, either 10 of them or 10,000 of them. The machine is opaque, so you can't tell how many balls are in it, but you're fairly certain that there are a lot of them. In fact, you initially estimate the probability of there being 10,000 balls in the machine to be about 95 percent, and of there being only 10 balls in it to be about 5 percent.
Now the machine rolls, you open a little door on its side, and a randomly selected ball rolls out. You see that it is ball number 8 and you place it back into the lottery machine. Do you still think there is only a 5 percent chance that there are 10 balls in the machine?
Given how low a number 8 is, it seems reasonable to think that the chances of there being only 10 balls in the machine are much higher than your original estimate of 5 percent. Given the assumptions of the problem, in fact, we can use a bit of mathematics called Bayes' theorem to conclude that your estimate of the probability of 10 balls being in the machine should be revised upward from 5 percent to 98 percent. Likewise, your estimate of the probability of 10,000 balls being in it should be revised downward from 95 percent to 2 percent.
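For readers who want to check the arithmetic, here is a minimal sketch of the Bayes' theorem calculation in Python, using the priors and ball counts from the example (the variable names are mine, not part of the original puzzle):

```python
# Bayes' theorem: P(H | data) = P(data | H) * P(H) / P(data)
prior_10, prior_10k = 0.05, 0.95   # initial beliefs: 10 balls vs. 10,000 balls

# Likelihood of drawing ball number 8 under each hypothesis:
# each ball in the machine is equally likely to roll out.
like_10 = 1 / 10
like_10k = 1 / 10_000

# Total probability of seeing ball 8, and the updated beliefs
evidence = like_10 * prior_10 + like_10k * prior_10k
post_10 = like_10 * prior_10 / evidence
post_10k = like_10k * prior_10k / evidence

print(round(post_10, 2), round(post_10k, 2))  # roughly 0.98 and 0.02
```

The low ball number is 1,000 times more probable under the 10-ball hypothesis than under the 10,000-ball hypothesis, and that likelihood ratio is what overwhelms the initial 95 percent prior.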
What does this have to do with Doomsday? To see, let's imagine a cosmic lottery machine, which contains the names and birth orders of all human beings from the past, present, and the future in it. Let's say we know that this machine contains either 100 billion names or — the optimistic scenario — 100 trillion names.
And how do we pick a human at random from the set of all humans? We simply consider ourselves; we argue that there's nothing special about us or about our time and that any one of us might be thought of as a randomly selected human from the set of all humans, past, present, and future. (This part of the argument can be much more fully developed.)
If we assume there have been about 80 billion humans so far (the number is simply for ease of illustration), the first alternative of 100 billion humans corresponds to a relatively imminent end to humankind — only 20 billion more of us to come before extinction. The second alternative of 100 trillion humans corresponds to a long, long future before us.
Even if we initially believe that we have a long, long future before us, when we randomly select a person's name from the machine and the person's birth order is only 80 billion or so, we should re-examine our beliefs. We should drastically reduce, or so the argument counsels, our estimate of the likelihood of our long survival, of there ultimately being 100 trillion of us.
The reason is the same as in the example with the lottery balls: The relatively low number of 8 (or 80 billion) suggests that there aren't many balls (human names) in the machine.
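The same calculation can be repeated with the human-scale numbers. A short Python sketch follows; note that the 5 percent / 95 percent starting priors are an illustrative assumption of mine (the column only stipulates an initially optimistic belief), while the 80 billion birth rank and the two population totals are from the example above:

```python
# Hypothetical priors: 5% chance humanity totals 100 billion,
# 95% chance it totals 100 trillion (an optimistic starting belief)
prior_short, prior_long = 0.05, 0.95

total_short = 100e9    # 100 billion humans, ever
total_long = 100e12    # 100 trillion humans, ever

# Likelihood that a randomly selected human has birth rank ~80 billion:
# under each hypothesis, every rank up to the total is equally likely.
like_short = 1 / total_short
like_long = 1 / total_long

evidence = like_short * prior_short + like_long * prior_long
post_short = like_short * prior_short / evidence

print(round(post_short, 3))  # roughly 0.981
```

Just as with the lottery balls, a birth rank of 80 billion is 1,000 times likelier if there will only ever be 100 billion of us, so even a strong initial optimism gets revised into near-certainty of the pessimistic scenario.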
Here's another slightly different example. Let's assume that Al receives about 20 e-mails per day, whereas Bob averages about 2,000 per day. Someone picks one of their accounts, chooses an e-mail at random from it, and notes that the e-mail is the 14th one received in the account that day. From whose account is the e-mail more likely to have come?
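The question answers itself with the same machinery. A quick sketch, assuming (my addition, not stated in the example) that each account was equally likely to be picked:

```python
# Rates from the example: Al gets ~20 e-mails/day, Bob ~2,000/day
prior_al = prior_bob = 0.5  # assume a 50/50 chance of picking either account

# Likelihood that a randomly chosen e-mail is the 14th of the day:
# any of the day's e-mails is equally likely to be chosen.
like_al = 1 / 20
like_bob = 1 / 2_000

evidence = like_al * prior_al + like_bob * prior_bob
post_al = like_al * prior_al / evidence

print(round(post_al, 2))  # roughly 0.99
```

A 14th e-mail is a far bigger fraction of Al's daily traffic than of Bob's, so the e-mail almost certainly came from Al's account.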
There are many other examples devised to probe the numerous weak points in the Doomsday Argument. Surprisingly, many of those weaknesses can be remedied, but a few of them, in my opinion, cannot be.
That a prehistoric man (who happened to understand Bayes' theorem) could make the same argument about a relatively imminent extinction is an objection that can be nicely addressed. Appealing to some so-called anthropic principle, whereby inferences are drawn from the mere fact that there are observers to draw them, is much more problematic.
In any case, there's probably still time to learn more about the Doomsday Argument and the use of the so-called anthropic principle in philosophy, cosmology and even everyday life. A good place to begin is Nick Bostrom's work, particularly his book, Anthropic Bias.
Professor of mathematics at Temple University and winner of the 2003 American Association for the Advancement of Science award for the promotion of public understanding of science, John Allen Paulos is the author of several best-selling books, including Innumeracy and A Mathematician Plays the Stock Market. His Who’s Counting? column on ABCNEWS.com appears the first weekend of every month.