Who's Counting: Which 'Experts' Make Better Political Predictions?

Nov. 5, 2006 -- An election is coming up, and pundits, commentators and assorted pooh-bahs are full of predictions.

"Expert Political Judgment," a recent book by University of California psychologist Philip Tetlock, suggests that we shouldn't be too impressed by these predictions. With some exceptions, they're not much better than those generated by a good pair of dice.

Tetlock convinced almost 300 experts to make quantifiable predictions about events that might or might not come to pass. The people, all of them professionals paid for "commenting or offering advice on political and economic trends," made more than 80,000 predictions over the period of the study.
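
One standard way to grade such quantifiable forecasts is the Brier score: the average squared gap between the probability a forecaster assigned to an event and what actually happened. The sketch below is purely illustrative -- the forecasts and outcomes are invented, not drawn from Tetlock's data, and his book uses more elaborate measures of calibration -- but it shows how a probability forecast can be scored and compared with chance:

# Minimal sketch: scoring probabilistic forecasts with the Brier score.
# All numbers below are invented for illustration, not data from Tetlock's study.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and outcomes.
    0.0 is perfect; always guessing 50 percent earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical predictions (probability the event occurs)
# and what actually happened (1 = occurred, 0 = did not).
expert_forecasts = [0.9, 0.2, 0.7]
coin_flip_forecasts = [0.5, 0.5, 0.5]
outcomes = [1, 0, 1]

print(brier_score(expert_forecasts, outcomes))     # about 0.047
print(brier_score(coin_flip_forecasts, outcomes))  # 0.25 -- the "pair of dice" baseline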

Although wrong so very frequently, the unnamed experts, like supermarket tabloid psychics, are rarely held accountable -- and, when they occasionally are, they cite the usual litany of excuses for their mistakes:
     They were right in principle, but their timing was off.
     A wholly improbable event caused their reasoned and reasonable prediction to fail.
     If only their failed predictions were interpreted correctly, they would have been seen to be correct.
     If some weasel word were taken one way rather than another, they'd be right.
     Their prediction was wrong, but it nevertheless demonstrated a correct analysis.

Some of their predictions are only slightly disguised versions of empty forecasts, like "Things to Stay the Same Until They Change" or "Things to Change Eventually."

The Linda Problem and Other Psychological Foibles

Many of the experts' mistakes result from common psychological foibles that afflict most people. One is the so-called Linda problem. A woman, Linda, is described as very bright, single, outspoken, a philosophy major and an anti-nuclear activist. Given this background, you are asked to say which of the following descriptions of her is more likely: (a) she's a bank teller, or (b) she's a bank teller who is an active feminist.

Most people will choose "b," because the feminist characterization fits comfortably with her background. Nevertheless, "a" is the more likely description, because for it to be true, only the condition that she's a bank teller needs to be met. In order for "b" to be true, however, two conditions must be met -- that she's a bank teller and that she's an active feminist. Satisfying two conditions can never be more likely than satisfying just one of them.
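
For readers who want to see the arithmetic, here is a minimal sketch of the conjunction rule at work. The probabilities are invented solely for illustration:

# Conjunction rule: P(A and B) can never exceed P(A).
# The probabilities below are made up purely to illustrate the point.

p_teller = 0.05                 # chance Linda is a bank teller
p_feminist_given_teller = 0.30  # chance she is an active feminist, if she is a teller

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller)               # 0.05
print(p_teller_and_feminist)  # 0.015 -- necessarily no larger than 0.05

Whatever numbers you plug in, multiplying the bank-teller probability by any further condition can only shrink it.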

In generating predictions, experts -- precisely because they know so much or hold such strong preconceptions -- tend to weave more elaborate scenarios than others do. Because of the mistaken intuition underlying the Linda problem, these elaborate scenarios can seem more plausible even though each added detail makes them no more likely, and usually less so.

This tendency to be swayed by extraneous facts and unexamined preconceptions is just one common failing. Experts, like commentators, politicians and everybody else, often fall victim to confirmation bias. This is the tendency, once you've made even a tentative decision, to avidly search for evidence confirming the wisdom of your decision and largely ignore evidence disconfirming it.

Garden-variety instances of confirmation bias occur in areas as disparate as the stock market, personal relations and legal conflicts. A more extreme case is the prosecution of the Iraq War, a textbook example of, among other things, an almost delusional myopia and of the staggering consequences to which it can lead.

A variety of cognitive illusions studied by Amos Tversky, Nobel Prize winner Daniel Kahneman and other psychologists are also discussed in "Expert Political Judgment." The book repeatedly underscores the idea that experts are as vulnerable to these illusions as most other people.

Hedgehogs, Foxes and the Markets

Tetlock did uncover one trait that the better predictors seemed to possess more than the less-successful ones. To name it, he used historian Isaiah Berlin's famous distinction between the hedgehog and the fox. Hedgehogs are thinkers who know one big thing and are in thrall to one overarching idea. Foxes, on the other hand, know many little things -- tricks, factoids, techniques -- and are wary of grand pronouncements and monolithic forces. Although less confident and less given to self-puffery, foxes make better predictors.

Foxes also more readily change their predictions in the light of unexpected events. Hedgehogs don't, in part because they're more subject to hindsight bias -- failing to remember their own mistaken predictions or retrofitting them to the facts. Alas, however, it is the hedgehogs who steadfastly adhere to simple (and often simplistic) ideas who generally become better known. Complexity confuses; simplicity sells.

Predicting political and economic events is a bit like picking stocks. Investors try to sense what the majority of other investors think about a particular stock before buying or selling. The task is very difficult, in part because the other investors are all trying to do the same thing. This self-referential and self-correcting aspect of the market makes unrelenting bullishness or bearishness appealing perhaps, but, overall, unprofitable. Those investors sensitive to subtly shifting investor attitudes do better.

Pundits, too, try to sense the mood of other pundits, policymakers and the general public before pontificating. This task, too, is very difficult, in part because many of the predicted are also predictors trying to do the same thing. This self-referential and self-correcting aspect of punditry makes predictions generated by big-idea hedgehogs memorable perhaps, but overall less reliable. Foxes, who "see the world as a shifting mixture of self-fulfilling and self-negating prophecies" that tend to "kick in as people recognize that things have gone too far," do better.

Uniting the two types of forecasts are the often surprisingly accurate prediction "markets" run by the University of Iowa and other organizations. In them, analysts can put at least a little bit of their money where their big mouths are. Such markets require clearly stated predictions. They bring the discipline of the market to forecasts that are often so nebulous as to be unfalsifiable. (As of this writing, the Iowa market predicts the Democrats will likely take the House but not the Senate.)
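
Roughly speaking, a winner-take-all contract in such a market pays a fixed sum (say $1) if the named event occurs and nothing otherwise, so its trading price can be read as the market's collective probability estimate. The sketch below uses invented prices, not actual Iowa quotes:

# Reading a winner-take-all prediction-market price as an implied probability.
# The contract pays $1 if the event happens, $0 otherwise, so (ignoring fees
# and interest) its price approximates the market's probability estimate.
# Prices below are invented for illustration, not actual Iowa market quotes.

PAYOFF = 1.00  # dollars paid if the event occurs

def implied_probability(price):
    return price / PAYOFF

dem_house_contract = 0.72   # hypothetical price of a "Democrats take the House" contract
dem_senate_contract = 0.30  # hypothetical price of a "Democrats take the Senate" contract

print(implied_probability(dem_house_contract))   # 0.72
print(implied_probability(dem_senate_contract))  # 0.30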

As Tetlock and others show, insights from cognitive science and probability theory have a role to play in the dicey business of predicting politics. Still, the best bet was described by the playwright Eugene Ionesco, who wrote, "You can only predict things after they've happened."

Professor of mathematics at Temple University, John Allen Paulos is the author of best-selling books, including "Innumeracy" and "A Mathematician Plays the Stock Market." His "Who's Counting?" column on ABCNEWS.com appears the first weekend of every month.