Medical Statistics Don't Always Mean What They Seem to Mean

From Absolute Risk to Mortality Rates, Better Ways to Present Medical Results

COMMENTARY
By JOHN ALLEN PAULOS

Jan. 3, 2010—

Medical statistics are often misunderstood (perhaps up to 72.381672 percent of the time).

Sometimes they're consciously spun. More often they're just phrased in an opaque way.

Given all the stories in recent weeks on cancer screening (mammograms, PSA tests, Pap tests), the effectiveness of drugs and supplements (statins, tamoxifen, vitamin D), not to mention the focus on the health care bill, it is a good time to briefly discuss a few better ways to present medical results.

Relative Risk vs. Absolute Risk

To make my points general, I'll refer to an abstract cancer X rather than any particular real cancer, which is anything but abstract.

That being said, imagine that a headline announces that screening for cancer X reduces deaths from it by 25 percent. Imagine as well that another headline announces that screening cuts deaths from cancer X by about 1 in 1,000, reducing the rate from 4 in 1,000 to 3 in 1,000.

These two headlines can describe the same result, the first expressing relative risk and the second absolute risk. All things being equal, describing results in terms of absolute risk gives a more informative picture than does describing them in terms of relative risk.

Absolute risk allows you to see the magnitude of the benefit, which is sometimes not all that great, especially if the downside of the screening (false positives, over-treatment, and the debilitating and disfiguring effects of radiation, chemotherapy and the like) is not immediately apparent.

We should always ask for the reduction in absolute risk and not just the reduction in relative risk. If giving up an occasional portion of pork rinds, say, reduces your risk of some cancer by 67 percent, you may still decide the pork rinds are worth it once you learn that your absolute risk declines only from three in a million to one in a million.
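For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the two descriptions, using the illustrative screening numbers above rather than data from any real study:

    # Relative vs. absolute risk for the hypothetical screening example:
    # deaths fall from 4 in 1,000 (unscreened) to 3 in 1,000 (screened).
    unscreened = 4 / 1000
    screened = 3 / 1000

    relative_reduction = (unscreened - screened) / unscreened
    absolute_reduction = unscreened - screened

    print(f"Relative risk reduction: {relative_reduction:.0%}")  # 25%
    print(f"Absolute risk reduction: {absolute_reduction:.1%}")  # 0.1%, i.e., 1 in 1,000

Both headlines come out of the same two numbers; only the choice of denominator differs.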

Survival Rate vs. Mortality Rate

Now consider two different countries that treat the same disease, cancer Y, quite differently. The first country touts the fact that its five-year survival rate for Y is 100 percent, seemingly a great success. The other country admits that its five-year survival rate for Y is 0 percent, seemingly a debacle.

But let's look at the mortality rates instead of the survival rates. It might be that in both countries there are approximately 45 cancer Y deaths per 100,000 people. How could this be? Simplifying to make the point, let's assume that people in either country who contract Y always do so in their mid-60s and live to the age of 75.

In the first country, there is an early screening program that identifies people with cancer Y in their mid-to-late 60s. Because these people live to 75, the five-year survival rate is 100 percent. The second country has no screening program at all, and people with Y are not diagnosed until they experience actual clinical symptoms in their early 70s. Their five-year survival rate is thus 0 percent.
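A few lines of Python make the toy example explicit (the diagnosis ages of 67 and 72 are stand-ins for "mid-to-late 60s" and "early 70s"):

    # Lead-time bias: in both countries everyone who contracts cancer Y
    # dies of it at 75; only the age at diagnosis differs.
    AGE_AT_DEATH = 75

    for country, age_at_diagnosis in [("Country 1 (screening)", 67),
                                      ("Country 2 (no screening)", 72)]:
        survives_five_years = AGE_AT_DEATH - age_at_diagnosis >= 5
        print(country, "five-year survival:",
              "100%" if survives_five_years else "0%")

    # The mortality rate (about 45 deaths per 100,000) is the same in
    # both countries, since the same people die at the same age.

Earlier diagnosis starts the five-year clock sooner without moving the date of death at all.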

The example is unrealistic and extreme, but a similar situation explains why Rudy Giuliani's superficially convincing claim a few years ago, that American medicine was so much better than British medicine, was wrong.

The five-year survival rate for prostate cancer is higher in the United States, but the mortality rates in the two countries do not differ that much. Because of the so-called lead-time bias associated with survival rates, mortality rates often provide a clearer picture of a cancer.

Probabilities vs. Frequencies

As a number of psychologists have shown, people tend to understand frequencies better than probabilities. Thus "event A occurs in 3 outcomes out of 10" is better understood than "the probability of event A is 30 percent." Even frequencies, though, are misunderstood by some.

Told that cancer X kills 2,850 out of 10,000 people and that cancer Y kills 28.5 out of 100 people, many believe cancer X to be worse than cancer Y, even though the two rates are identical: 28.5 percent. Presumably even worse would be a cancer that killed 28,500 out of 100,000. This tendency to be impressed by the larger numbers is sometimes called ratio bias.
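A quick sketch shows that all three descriptions name the same rate; only the denominators differ:

    # Ratio bias: one death rate dressed up in three denominators.
    for killed, total in [(28.5, 100), (2_850, 10_000), (28_500, 100_000)]:
        print(f"{killed} out of {total}: {killed / total:.1%}")
    # Each line prints 28.5% -- the larger raw counts just feel worse.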

Probabilities and frequencies, even if well-understood, can give rise to counterintuitive results. This is especially relevant to the issue of false positives in mammography, but I'll state it generally.

Let's consider yet another cancer Z and a test for it that satisfies the following three conditions:

1.) The probability a person has cancer Z is 1 percent.

2.) If the person has Z, the test is positive 95 percent of the time.

3.) If the person doesn't have Z, the test is still positive 3 percent of the time.

Presented as frequencies the conditions are:

1.) On average, 1 out of every 100 people has Z.

2.) Of 100 people with Z, 95 will test positive.

3.) Of 100 people who are Z-free, 3 of them will test positive.

However these conditions are presented, the crucial question is what fraction of those people who test positive for Z actually have it. The surprising answer (see below) is about 24 percent, a calculation that studies show many doctors are unable to perform.

Statistical terms from p-values to odds ratios and confidence intervals are often misinterpreted by patients and doctors alike. (This gives an unfortunate new meaning to a double-blind test.) Probability and statistics are often seen as cold subjects, not mindful of the individual.

Uncertainty and trade-offs, however, are an inevitable part of life, and a proper and humane understanding of them can help minimize the number of patients turning prematurely cold.

Answer: I'll assume readers do not suffer from ratio bias and that tests for Z are given to 100,000 people. By assumption, 1 percent of them, or 1,000 (.01 x 100,000), will have Z. Since 95 percent of these 1,000 will test positive for Z, there will be approximately 950 (.95 x 1,000) true positive tests. But 99,000 (100,000 - 1,000) people are healthy.

Nevertheless, by assumption, 3 percent of them, or 2,970 (.03 x 99,000), will also test positive. These latter are false positives.

Thus, we have a total of 3,920 positive tests (2,970 + 950) of which only 950 are true positives. That is, only 950 of the 3,920 positive tests indicate cancer.

In other words, the probability of cancer Z given that one has tested positive for it is only 950/3,920, or about 24 percent.
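For readers who would rather check the calculation by machine, here is the same computation as a short Python sketch, with the three conditions exactly as assumed above:

    # Positive predictive value of the test for cancer Z, via Bayes' theorem.
    population = 100_000
    prevalence = 0.01            # condition 1: 1 percent have Z
    sensitivity = 0.95           # condition 2: positive 95% of the time if Z
    false_positive_rate = 0.03   # condition 3: positive 3% of the time if Z-free

    with_z = population * prevalence                               # 1,000 people
    true_positives = with_z * sensitivity                          # 950
    false_positives = (population - with_z) * false_positive_rate  # 2,970

    ppv = true_positives / (true_positives + false_positives)
    print(f"P(Z | positive test) = {ppv:.1%}")  # about 24.2%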

John Allen Paulos, a professor of mathematics at Temple University in Philadelphia, is the author of the best-sellers "Innumeracy" and "A Mathematician Reads the Newspaper," as well as (just out in paperback) "Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up." His "Who's Counting?" column on ABCNews.com appears the first weekend of every month.