Study vs. Study: The Decline Effect and Why Scientific 'Truth' So Often Turns Out Wrong

Not surprisingly, results of experiments and studies with small samples often appear in the literature, and these results frequently suggest that the observed effects are quite large -- at one end or the other of the large margin of error. When researchers attempt to demonstrate the effect on a larger sample of subjects, the margin of error is smaller and so the effect size seems to shrink or decline.
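
To make the statistics concrete, here is a minimal simulation sketch (Python with NumPy; the true effect size of 0.2 and the group sizes of 20 and 500 are assumptions chosen purely for illustration, not figures from any actual study). It shows how effect estimates from small studies scatter far more widely around the true value than estimates from large ones, so the most dramatic small-study results tend to overstate the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2   # assumed true (standardized) effect size
n_studies = 10_000  # number of simulated studies per sample size

def simulate_effects(n_per_group):
    """Estimated treatment-minus-control effect in many simulated two-group studies."""
    treated = rng.normal(true_effect, 1.0, size=(n_studies, n_per_group))
    control = rng.normal(0.0, 1.0, size=(n_studies, n_per_group))
    return treated.mean(axis=1) - control.mean(axis=1)

small = simulate_effects(n_per_group=20)    # small pilot-sized studies
large = simulate_effects(n_per_group=500)   # larger replications

print(f"spread of small-study estimates: {small.std():.3f}")
print(f"spread of large-study estimates: {large.std():.3f}")
# The small-study estimates scatter about five times as widely
# (sqrt(500/20) = 5), so the eye-catching ones overshoot the true 0.2;
# a larger replication, with its narrower margin of error, lands much
# closer to 0.2 and the effect appears to "decline."
```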

Publication Bias, Other Psychological Foibles

Publication bias is, no doubt, also part of the reason for the decline effect. That is, seemingly significant experimental results are published much more readily than those suggesting no effect or only a small one. People, including journal editors, naturally prefer papers announcing or at least suggesting a dramatic breakthrough to those saying, in effect, "Ehh, nothing much here."
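
The same toy simulation can mimic this publication filter (again Python/NumPy; the true effect of 0.2, groups of 20, and the 1.96 significance cutoff are illustrative assumptions). Keeping only the "significant" small-study results inflates the average published effect well above the truth, which is exactly what a later, larger replication then appears to erase.

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.2
n_per_group = 20                   # small studies
n_studies = 10_000
se = np.sqrt(2.0 / n_per_group)    # standard error of the difference in means

# Simulate many small two-group studies of the same modest true effect.
treated = rng.normal(true_effect, 1.0, size=(n_studies, n_per_group))
control = rng.normal(0.0, 1.0, size=(n_studies, n_per_group))
estimates = treated.mean(axis=1) - control.mean(axis=1)

# "Publish" only the results that clear the conventional significance bar.
published = estimates[estimates / se > 1.96]

print(f"true effect:              {true_effect:.2f}")
print(f"mean of all estimates:    {estimates.mean():.2f}")
print(f"mean of 'published' ones: {published.mean():.2f}")
# Only the studies that happened to overshoot survive the filter, so the
# published average is far above 0.2 -- and subsequent, larger studies
# look like a mysterious decline rather than a correction.
```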

The availability error, the tendency to be unduly influenced by results that, for one reason or another, are more psychologically available to us, is another factor. Results that are especially striking, counterintuitive, or consistent with experimenters' pet theories are also more likely to be published.

Even such a prosaic occurrence as clock-watching provides an illustration of this. I don't think I look at the clock more than others do, yet I always seem to notice and remember when the time is 12:34, but not when it's 10:56 or 7:41.

Scientists are, of course, subject to the same foibles as everyone else. When reading novels or watching movies, most of us make an attempt to suspend our disbelief to better enjoy a good story. When doing health or other scientific studies, researchers usually try to do the opposite. They attempt to suspend their belief to better test their results.

Alas, they sometimes succumb to a good story and fiddle with the results to preserve its coherence. This was part of the problem in the recent hyped accounts of arsenic-based life.

There's also the problem of poor experimental design and the sometimes unknown confounding variables (even different placebos) whose effects can mask or reverse the suspected effect. The human tendency to exaggerate results and to indulge one's vanity by sticking with the initial exaggeration cannot be dismissed either.

A greater realization of these effects by journalists, scientists, and everyone else will lead to more caution in reporting results, more realistic expectations, and, I would guess, a decline in the decline effect (more accurately, the stat-psych effect).

Getting at the truth has always been hard. Nevertheless, we can take comfort in the fact that, though nature is tricky, she is not out to trick us.

That is probably what Einstein meant when he wrote, "God is subtle, but he is not malicious."

John Allen Paulos, a professor of mathematics at Temple University in Philadelphia, is the author of the best-sellers "Innumeracy" and "A Mathematician Reads the Newspaper," as well as, most recently, "Irreligion." He's on Twitter and his "Who's Counting?" column on ABCNews.com usually appears the first weekend of every month.
