Problem With Drug Studies: The Patients?

Some believe strict regimen followers skew drug results, even with placebos.

May 3, 2009 -- An acquaintance in graduate school once described to me his father's business and its sad demise.

He claimed that his father, years before, had run a large college preparation service in a small country. My friend's father advertised that he knew how to drastically improve applicants' chances of getting into the elite national university.

Hinting at inside contacts and claiming knowledge of the various forms, deadlines and procedures, he charged an exorbitant fee for his service, which he justified by offering a money-back guarantee to students who were not accepted.

One day, the secret of his business model came to light. All the material that prospective students had sent him over the years was found unopened in a trash dump. Upon investigation it turned out that he had simply been collecting the students' money (or rather their parents' money) and doing nothing for it. The trick was that his fees were so high and his marketing so focused that only the children of affluent parents subscribed to his service, and almost all of them were admitted to the university anyway. He refunded the fees of those few who were not admitted.

He also was sent to prison for his efforts.

Although it's not completely analogous, this story came to mind when I read a February blog posting by Dr. Michael Eades in which he questioned the efficacy of statins in lowering all-cause mortality.

More on this later, but one much less controversial point made by Eades is that miracle drugs, diets and exercise regimens almost always leave at least a little room for doubt because they can never be fully confirmed by randomized, double-blind, placebo-controlled studies.

In such studies, researchers randomly assign people to one of two similar groups, provide the members of one group the drug being tested and the members of the other an identical-seeming placebo. Everything is coded so that neither the researchers nor the subjects know who is receiving what. When the study is completed, the results are analyzed to see if the drug really has any statistically significant effects.
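For readers who like to see the mechanics, here is a minimal sketch in Python of how such a coded, two-group comparison might be analyzed once a study is unblinded. The numbers are invented for illustration and do not come from any study mentioned in this column; the drug is assumed to be useless, so both groups share the same underlying event rate.

    import math, random

    random.seed(0)

    # Hypothetical trial: 2,000 subjects randomly assigned to drug or placebo.
    # The assignment list is the hidden "code"; nobody looks at it until the end.
    groups = {"drug": [], "placebo": []}
    for subject in range(2000):
        groups[random.choice(["drug", "placebo"])].append(subject)

    # Made-up outcome model: the drug is assumed useless, so both groups
    # have the same underlying 10 percent event rate.
    def had_event():
        return random.random() < 0.10

    events = {g: sum(had_event() for _ in members) for g, members in groups.items()}
    rates = {g: events[g] / len(groups[g]) for g in groups}
    print(rates)

    # Two-proportion z-test: is the difference in event rates bigger than
    # chance alone would plausibly produce?
    n1, n2 = len(groups["drug"]), len(groups["placebo"])
    p_pool = (events["drug"] + events["placebo"]) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (rates["drug"] - rates["placebo"]) / se
    print("z statistic:", round(z, 2))  # |z| > 1.96 is significant at the 5 percent level

Run it a few times with different seeds and the two rates jiggle around each other without the difference ever becoming significant, which is exactly what a trial of a useless drug should show.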

With diet and exercise programs this platinum standard of randomized, double-blind, placebo-controlled studies is clearly impossible because, unless they're zombies, people know whether and how they've been exercising or dieting. With many drug tests, there is the related problem of which subjects adhere to the drug regimen.

A Simple Case

This was part of the problem, for example, with hormone replacement therapy (HRT). For years, it was seen as conferring cardiovascular benefits on the women who took the combination of estrogen and progestin because they suffered somewhat fewer heart attacks, strokes and the like.

There was, it's now clear, a biasing variable in these earlier studies: The women involved were, for the most part, relatively affluent and health-conscious, and their lower rate of cardiovascular problems was probably due to this and not to their taking HRT.

In fact, a large subsequent study on HRT published in the Journal of the American Medical Association overturned the conventional wisdom about the treatment. Eliminating the self-selection bias, it found that the relative risks of heart attack, stroke and breast cancer were 1.29, 1.41 and 1.24, respectively, meaning that women taking HRT incur 1.29, 1.41 and 1.24 times the risk of women not taking it.
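The arithmetic behind a relative risk is simple enough to fit in a few lines. The counts below are invented purely to illustrate the calculation; only the ratios quoted above (1.29, 1.41, 1.24) come from the actual HRT study.

    # Relative risk: the event rate in the treated group divided by
    # the event rate in the untreated group.
    def relative_risk(events_treated, n_treated, events_untreated, n_untreated):
        return (events_treated / n_treated) / (events_untreated / n_untreated)

    # e.g., 129 heart attacks per 10,000 women on HRT vs. 100 per 10,000 not on it
    print(relative_risk(129, 10_000, 100, 10_000))  # 1.29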

A More Interesting Twist

More intriguing is the case of the pre-statin heart drug clofibrate, which Eades cites. As the New England Journal of Medicine reported almost 30 years ago, the five-year mortality rate among the 1,100 or so men treated with this drug was 20.0 percent, whereas the rate for the nearly 2,800 men given the placebo was 20.9 percent. Nothing mysterious there. The drug was useless.

But, and here's the odd part, those men who filled 80 percent or more of their prescriptions over the time interval had a 15.0 percent five-year mortality rate whereas those who filled their prescriptions less than 80 percent of the time suffered a 24.6 percent mortality rate.

Even more odd is that similar percentages resulted for those who were diligent about taking the placebo. There was a 15.1 percent mortality rate for them and a 28.3 percent mortality rate for those who took the placebo less than 80 percent of the time. Something a bit more than confirmation bias, confounding variables and a simple placebo effect was going on.
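Laying the four percentages just cited side by side makes the oddity plain. The rates below are the ones reported above for the clofibrate trial; only the tabulation is mine.

    # Five-year mortality rates from the clofibrate trial, broken down
    # by treatment arm and by adherence (80 percent cutoff).
    mortality = {
        ("clofibrate", "adherent"): 0.150,
        ("clofibrate", "non-adherent"): 0.246,
        ("placebo", "adherent"): 0.151,
        ("placebo", "non-adherent"): 0.283,
    }

    for (arm, adherence), rate in mortality.items():
        print(f"{arm:10s} {adherence:13s} {rate:.1%}")

    # The drug-vs-placebo comparison shows essentially nothing, while the
    # adherent-vs-non-adherent gap is large in BOTH arms -- the signature
    # of an adherence effect rather than a drug effect.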

A more recent 2005 article in the Lancet showed the same anomalous finding of comparable percentages when the heart drug candesartan was pitted against a placebo in a randomized study. More generally, a 2006 meta-analysis in the British Medical Journal of the effect of adherence on medical outcomes found the same thing.

Those who adhered to what was offered them, whether the drug or the placebo, were somehow different from those who did not, and they fared significantly better.

Given this phenomenon, it makes sense to ask how one can evaluate the effectiveness of drugs in cases where the adherence effect may mask the uselessness of an expensive drug.

Eades, who advocates a low-carbohydrate/high-protein diet in his several books on nutrition, has written extensively about adherence and related matters on his popular medical Web site -- specifically, as mentioned, regarding the anti-cholesterol statin drugs. (Eades' statins article: http://www.proteinpower.com/drmike/statins/more-statin-madness/#more-2656)

He suggests that further analysis of a large study reported in the February 2009 issue of Archives of Internal Medicine might reveal that the seemingly positive effects of statins announced there may be merely a consequence of the adherence effect. That is, he claims that, like clofibrate, candesartan and other drugs, statins may not do much of anything for all-cause mortality.

A few other researchers agree with his intriguingly iconoclastic claim, but it's important to note that it is very much at odds with the present medical consensus on statins. (The matter, however, is extremely complicated, the patient groups very diverse, and the relevant studies of quite disparate types.)

Nevertheless, I emphasize that one needn't agree with the claim to realize that the adherence effect is quite general. The point is methodological, not medical. Moreover, instances of it might help explain the efficacy of belief and adherence in many realms other than medicine, drug testing, and college admissions.

A notable example is religion, where other enhancers of the placebo effect, such as a recognized "brand" and dispensation by an authority figure, also play a role.

John Allen Paulos, a professor of mathematics at Temple University, is the author of the best-sellers "Innumeracy" and "A Mathematician Reads the Newspaper," as well as "Irreligion: A Mathematician Explains Why The Arguments for God Just Don't Add Up." His "Who's Counting?" column on ABCNews.com appears the first weekend of every month.