Surveys commonly are weighted to the number of telephone lines in each respondent’s home to adjust for the higher probability of selection of multiple-line households. ABC News has studied the effect of such weighting (Merkle and Langer, Public Opinion Quarterly, Vol. 72, No. 1, Spring 2008), concluding that it carries the risk of distortion and, even when done properly, has no meaningful impact on the data. ABC News polls therefore are not weighted to the number of household phone lines.
Poll results may deviate from full population values because they rely on a sample rather than a census of the full population. Sampling error can be calculated when probability sampling methods, such as those described here, are employed, using the standard formula (at the 95 percent confidence level) of sqrt(.25/sample size) * 1.96. Other sources of differences among polls include question wording and order, design effect from clustering in an area probability sample, and systematic non-coverage or selection bias.
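The standard formula above is straightforward to apply. As a sketch (the function name and sample sizes below are illustrative, not drawn from any particular poll), it uses the conservative assumption p = .5, which makes p(1 − p) = .25 the maximum possible variance:

```python
import math

def margin_of_error(sample_size, z=1.96):
    """Maximum sampling error at the 95 percent confidence level,
    using the conservative p = .5 (so p * (1 - p) = .25)."""
    return math.sqrt(0.25 / sample_size) * z

# A typical ~1,000-interview national sample:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 points

# A 100-case subgroup:
print(round(margin_of_error(100) * 100, 1))   # about 9.8 points
```

Note how the error margin grows as the sample shrinks, which is why subgroup estimates carry wider margins than full-sample estimates.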
As a function of sample size, sampling error is higher for subgroups. We analyze subgroups only as small as 100 cases (or very near it), for which the error margin is 10 percentage points. See our fuller description of sampling error here.
A survey’s response rate represents its contact rate (the number of households reached out of total telephone numbers dialed, excluding an estimate of nonworking and business numbers) multiplied by its cooperation rate (the number of individuals who complete interviews out of total households reached).
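The arithmetic above can be sketched as follows; the figures used here are hypothetical, chosen only to make the multiplication concrete:

```python
def response_rate(numbers_dialed, households_reached, interviews_completed):
    """Response rate = contact rate x cooperation rate.
    contact rate: households reached out of total numbers dialed
    (with nonworking and business numbers already excluded by estimate);
    cooperation rate: completed interviews out of households reached."""
    contact_rate = households_reached / numbers_dialed
    cooperation_rate = interviews_completed / households_reached
    return contact_rate * cooperation_rate

# Hypothetical example: 2,000 eligible numbers dialed,
# 1,200 households reached, 600 interviews completed.
rate = response_rate(2000, 1200, 600)
print(rate)  # contact .60 x cooperation .50 = .30, i.e. a 30 percent response rate
```

As the next paragraph notes, the two components can move in opposite directions: design choices that widen coverage (and thus the denominator of the contact rate) tend to lower the overall response rate even when cooperation is unchanged.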
It cannot be assumed that a higher response rate in and of itself ensures greater data integrity. By including business-listed numbers, for instance, ABC News increases coverage, yet decreases contact rates (and therefore overall response rates). Adding cell-only phones also increases coverage but lessens response rates. On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage. (For a summary see Langer, 2003, Public Perspective, May/June: 16-8.)
Research has found no significant attitudinal biases as a result of response rate differences. A study published in 2000, “Consequences of Reducing Nonresponse in a National Telephone Survey” (Keeter, Miller, Kohut, Groves and Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey” (Keeter, Kennedy, Dimock, Best and Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found “little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates.” As far back as 1981, in “Questions & Answers in Attitude Surveys,” Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), “Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences.”