It cannot be assumed that a higher response rate in and of itself ensures greater data integrity. By including business-listed numbers, for instance, ABC News increases coverage, yet decreases contact rates (and therefore overall response rates). Adding cell-only phones also increases coverage but lessens response rates. On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage. (For a summary see Langer, Public Perspective, May 2003.)
Research has found no significant attitudinal biases as a result of response rate differences. A study published in 2000, “Consequences of Reducing Nonresponse in a National Telephone Survey” (Keeter, Miller, Kohut, Groves & Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey” (Keeter, Kennedy, Dimock, Best & Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found “little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates.” Still another Pew comparison, in 2012, with an even lower response rate, found similar results. As far back as 1981, in “Questions & Answers in Attitude Surveys,” Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), “Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences.”
In spring 2003 ABC News and the Washington Post produced sample dispositions for five randomly selected ABC/Post surveys at the request of Prof. Jon Krosnick, then of Ohio State University, for use in a study of response rates. The cooperation rate calculations produced by Krosnick’s team for these five surveys ranged from 43 to 62 percent, averaging 52 percent; response rates ranged from 25 to 32 percent, based on what AAPOR describes as a “very conservative” estimate of the number of business and nonworking numbers in the sample (known as “e”). The range was 31 to 42 percent using a more common estimate of this variable proposed by Keeter et al. in 2000.
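The sensitivity of a response rate to the estimate “e” can be illustrated with a simplified version of an AAPOR-style calculation (a sketch of RR3, omitting partial interviews and other minor disposition categories). The disposition counts below are hypothetical, chosen only to show how a conservative e (treating all unknown-eligibility numbers as eligible) produces a lower rate than a survey-based estimate like that proposed by Keeter et al.

```python
# Simplified AAPOR-style response rate (RR3 sketch): "e" is the assumed
# share of unknown-eligibility numbers (e.g., never-answered lines) that
# are in fact eligible households. Counts here are hypothetical.

def response_rate_rr3(interviews, refusals, noncontacts, unknown, e):
    """Completed interviews over the estimated number of eligible cases."""
    eligible = interviews + refusals + noncontacts + e * unknown
    return interviews / eligible

# Hypothetical sample dispositions
I, R, NC, U = 1000, 800, 700, 1500

conservative = response_rate_rr3(I, R, NC, U, e=1.0)  # all unknowns counted as eligible
estimated = response_rate_rr3(I, R, NC, U, e=0.4)     # a lower, survey-based estimate

print(f"e=1.0 -> RR3 = {conservative:.1%}")  # 25.0%
print(f"e=0.4 -> RR3 = {estimated:.1%}")     # 32.3%
```

With identical field results, the reported rate moves from 25 to roughly 32 percent solely because of the assumption about e, mirroring the kind of spread described above.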
In their study (“The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms,” in Advances in Telephone Survey Methodology, Chapter 23, Wiley 2007), Holbrook, Krosnick and Pfent concluded, “lower response rates seem not to substantially decrease demographic representativeness within the range we examined. This evidence challenges the assumptions that response rates are a key indicator of survey quality.”
For further discussion of response rates and related issues, see our 2012 presentation at the annual meeting of the American Association for Public Opinion Research, “In Defense of Probability: Are the Times A-Changin’?”
Pre-election polling presents particular challenges. As Election Day approaches these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population – one that exists only on (or, with absentees, shortly before) Election Day. Pre-election polls make their best estimate of this population.