ABC News' Polling Methodology and Standards

A summary of ABC News polling standards and methodology follows.


Langer Research Associates, primary polling provider to ABC News, advises the news division on standards for disclosure, validity, reliability and unbiased content in survey research, and evaluates data when requested to establish whether it meets these standards.

On disclosure, in addition to the identities of the research sponsor and field work provider, we require a detailed statement of methodology, the full questionnaire and complete marginal data. If any of these are lacking, we recommend against reporting the results. Proprietary research is not exempted.

Methodologically, in all or nearly all cases we require a probability-based sample, with high levels of coverage of a credible sampling frame. Non-probability, self-selected or so-called “convenience” samples, including internet opt-in, e-mail, “blast fax,” call-in, street intercept and non-probability mail-in samples do not meet our standards for validity and reliability, and we recommend against reporting them.

We do accept some probability-based surveys that do not meet our own methodological standards – in terms of within-household respondent selection, for example – but may recommend cautious use of such data, with qualifying language. We recommend against reporting others, such as pre-recorded autodialed surveys, even when a random-digit dialed telephone sample is employed.

Langer Research Associates has published briefing papers summarizing recent research on non-probability sampling, including opt-in online surveys, the use of social media (and related approaches) to estimate public opinion and challenges in the use of “big data.” We’ve also commented on non-probability sampling in the Fall 2013 issue of the Journal of Survey Statistics and Methodology and in a 2012 presentation at the annual conference of the American Association for Public Opinion Research.

In terms of survey content, we examine methodological statements for misleading or false claims, questionnaires for leading or biasing wording or ordering, and analyses and news releases for inaccurate or selective conclusions.

In addition to recommending against reporting surveys that do not meet appropriate standards for validity and reliability, we strongly encourage the reporting of good-quality polls that break new ground in opinion research.

Sample Design

ABC and The Washington Post direct the methodological approach to full-length ABC/Post polls in consultation with our field work provider for these surveys, Abt SRBI of New York, N.Y. Shorter-length ABC/Post polls are conducted via an omnibus survey produced by SSRS of Media, Pa. See methodological details here. Additionally, as of August 2016, ABC News is partnering with SSRS on polling conducted via the online SSRS Probability Panel; see details here.

Before October 2008, full-length ABC/Post polls were conducted by calling samples of landline telephone numbers only. From October 2008 through June 2015 we added cell phone interviews via a non-overlapping dual-frame sample design, with separate sampling frames for landline and cell phone-only respondents, as detailed in this paper. The cell phone-only proportion, based on data from the National Health Interview Survey, grew from 100 to 335 interviews per 1,000 during this period.

The non-overlapping design served well, especially through a time in which cell phone interviews were much costlier than landline interviews. However, the cost differential has flattened over time and the incidence of cell phone use has continued to grow, producing a shortfall in the number of young adults reached via this design. As a result, in July 2015 we adopted an overlapping dual frame sample design, in which cell phone respondents are interviewed regardless of whether or not they also have a landline.

The proportion of cell phone interviews again is driven by the NHIS estimate of cell phone-only respondents; to achieve our target, 65 percent of all interviews are conducted by cell phone, with the remaining 35 percent interviewed via landline.


Cell phone and landline samples are produced by Survey Sampling Inc. of Shelton, Conn. For landline interviews, SSI selects a sample of landline households in the continental United States via random digit dialing, in which all landline telephone numbers, listed and unlisted, have an equal probability of selection. Landline numbers are drawn proportionate to their estimated distribution in the country’s nine Census divisions.

SSI starts with a database of all listed landline telephone numbers, updated on a four- to six-week rolling basis, 25 percent of listings at a time. This database of directory-listed numbers is then used to determine all active blocks – as we define them, contiguous groups of 100 phone numbers in which more than one residential number is listed. All possible numbers in active blocks are added to the random digit database.

Until 2005, ABC News followed the industry norm of excluding all listed business numbers (compiled from sources such as Yellow Pages directories and the Dun and Bradstreet Business Data database) from the sample. However, an ABC-led study (Merkle, Langer, Cohen, Piekarski, Benford & Lambert, 2009, Public Opinion Quarterly) found that this “cleaning” process excludes respondents who have home-based business-listed phones and no other lines at home on which they take calls, creating 3 percent noncoverage of eligible households with no offsetting gains in productivity. As a result of this evaluation, we do not exclude listed business numbers from our landline sample, with the exception of those in business-only blocks or exchanges.

Each telephone exchange in the landline sample is assigned to the county where it’s most prevalent. In the first stage of selection, the database is sorted by state and county, and the number of telephone numbers to be sampled within each county is determined using systematic sampling procedures from a random start, such that each county is assigned a sample size proportional to its share of possible numbers. In the second stage of selection, telephone numbers are sorted within county by area code, exchange and active block, and using systematic sampling procedures from a random start, individual phone numbers within each county are selected. The sampled phone numbers are pre-dialed via a non-ringing auto-dialer to reduce dialing of non-working numbers.
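The two-stage draw described above can be sketched in code. The county counts, total sample size and integer stand-ins for phone numbers below are invented for illustration; the actual procedure operates on SSI's sorted phone-number database, not on these toy inputs.

```python
import random

def systematic_sample(items, n, rng):
    """Select n items via systematic sampling from a random start:
    step through the sorted list at a fixed interval."""
    if n <= 0 or not items:
        return []
    interval = len(items) / n
    start = rng.uniform(0, interval)
    return [items[int(start + i * interval)] for i in range(n)]

# Stage 1: allocate the total sample across counties in proportion
# to each county's share of possible numbers (illustrative counts).
county_numbers = {"County A": 50_000, "County B": 30_000, "County C": 20_000}
total_sample = 100
total_numbers = sum(county_numbers.values())
allocation = {c: round(total_sample * n / total_numbers)
              for c, n in county_numbers.items()}

# Stage 2: within each county, numbers sorted by area code, exchange
# and active block are drawn systematically from a random start.
rng = random.Random(0)
sample = []
for county, n_alloc in allocation.items():
    numbers = list(range(county_numbers[county]))  # stand-in for sorted phone numbers
    sample.extend(systematic_sample(numbers, n_alloc, rng))

print(len(sample))  # 100 under this allocation
```

Because the allocation is proportional to each county's share of possible numbers, every number retains an equal overall probability of selection.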

For the cell phone sample, SSI begins with a monthly listing of every existing telephone area code and exchange. About half of these are pooled by their producers in contiguous groups of ten 100-blocks of phone numbers, or 1,000-blocks, with information on whether each pooled 1,000-block includes cell phone numbers, either solely or on a shared basis with landline numbers.

All cell-inclusive 1,000-blocks are included in the cell phone sample. For numbers that are not 1,000-block pooled, cell phone service information is available at the exchange level only; therefore all numbers in those exchanges also are included. All numbers used in cell phone sampling are then handled at the 100-block level. Given the absence of any cell phone directory, all 100-blocks in dedicated wireless exchanges and 1,000-blocks used for sampling purposes are considered active.

For exchanges or 1,000-blocks that have been classified by their carrier as providing both landline and wireless service, each 100-block is compared to the database of landline 100-blocks; 100-blocks that appear on the landline frame are removed from the wireless frame and 100-blocks with no directory-listed numbers are retained. This ensures that the wireless frame and list-assisted RDD frame are mutually exclusive while still providing coverage of prefixes and 1,000-blocks that are classified as including both landline and wireless service.
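The frame partition amounts to a set difference at the 100-block level. The block stems below are invented for illustration:

```python
# Illustrative 100-block stems (area code + exchange + block digit).
landline_frame = {"2035550", "2035551", "2035552"}        # list-assisted RDD frame
mixed_service_blocks = {"2035551", "2035553", "2035554"}  # carrier-flagged mixed blocks

# 100-blocks already on the landline frame are removed from the
# wireless frame; the rest, including blocks with no directory
# listings, are retained on the wireless frame.
wireless_frame = mixed_service_blocks - landline_frame

print(sorted(wireless_frame))  # ['2035553', '2035554']
```

The set difference guarantees the two frames are mutually exclusive, so no number can be sampled through both.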

Each 100-block is assigned to a county based on the billing coordinates of the exchange. The database is sorted by county code, carrier name and 100-block. A sampling interval is determined by dividing the universe of eligible 100-blocks by the desired sample size. From a random start within the first sampling interval, a systematic nth selection of 100-blocks is performed and a 2-digit random number between 00 and 99 is appended to each selected 100-block stem.
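A minimal sketch of the interval calculation and suffix assignment, using invented 100-block stems and a small sample size:

```python
import random

rng = random.Random(1)

# Illustrative eight-digit 100-block stems (area code, exchange,
# block), already sorted by county code, carrier and 100-block.
eligible_blocks = [f"203555{b:02d}" for b in range(50)]
desired_sample = 5

# Sampling interval: eligible universe divided by desired sample size.
interval = len(eligible_blocks) / desired_sample  # 10.0

# Systematic nth selection from a random start within the first
# interval, then append a random two-digit number (00-99) to each
# selected stem to form a complete ten-digit phone number.
start = rng.uniform(0, interval)
selected = [eligible_blocks[int(start + i * interval)] for i in range(desired_sample)]
numbers = [stem + f"{rng.randrange(100):02d}" for stem in selected]

print(numbers)
```

Appending the random suffix is what makes the design random-digit dialing: every number within a selected 100-block has an equal chance of being generated, listed or not.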


In each sample, phone numbers are released for interviewing in replicates by Census region (cell) or division (landline) to allow for sample control. Numbers are called multiple times during the field period in multi-night polls; the standard for full-length ABC/Post polls is a minimum of six calls to each number. Interviews are conducted via a computer-assisted telephone interviewing (CATI) system. Abt SRBI’s professional interviewers, and their supervisors, are extensively trained in interviewing practices, including techniques designed to achieve the highest possible respondent cooperation.

For landline respondents, interviewers ask to speak with the youngest male or youngest female at home. Cell-only respondents are screened for age eligibility (18+); they are not offered compensation, but a reimbursement check is offered if use of minutes is raised as an objection. Cell sample respondents’ place of residence is checked and their Census region adjusted if necessary.

As of April 2013, Spanish-language interviewing was added to full-length ABC/Post polls for respondents who indicate a preference to be interviewed in Spanish. Spanish-language interviewing in SSRS omnibus surveys began in October 2009.


Data are adjusted to account for the greater probability of respondents who have both a cell and landline phone, compared with those who are cell-only or landline-only. The data then are weighted using demographic information from the U.S. Census and NHIS to adjust for variance from population values. Weights may include average partisan self-identification in current and recent ABC/Post data, based on a standardized rule.
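One common form of this adjustment weights each respondent inversely to the number of frames through which he or she could have been sampled. The sketch below is a simplification that assumes equal selection chances within each frame; it is not the poll's exact formula.

```python
def frame_adjustment(has_landline: bool, has_cell: bool) -> float:
    """Base weight inversely proportional to the number of sampling
    frames (landline, cell) on which a respondent could be reached.
    Simplified: assumes equal within-frame selection probabilities."""
    frames = int(has_landline) + int(has_cell)
    if frames == 0:
        raise ValueError("respondent must be reachable on at least one frame")
    return 1.0 / frames

print(frame_adjustment(True, False))  # landline-only: 1.0
print(frame_adjustment(True, True))   # dual service: 0.5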

Until 2008 we used cell-based weighting, in which respondents were classified into one of 48 or 32 cells (depending on sample size) based on their age, race, sex and education; weights were assigned so the proportion in each cell matched the Census Bureau’s most recent Current Population Survey data. To achieve greater consistency and reduce the chance of large weights, in January 2008 we adopted iterative weighting, also known as raking or rim weighting, in which the sample is weighted sequentially to Census targets one variable at a time, continuing until the optimum distribution is achieved.
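The raking procedure can be sketched as follows. The variables, levels and target shares are invented for illustration; production weighting uses more variables, real Census targets and convergence checks.

```python
def rake(weights, categories, targets, iterations=50):
    """Iterative (rim) weighting: scale weights so each variable's
    weighted marginal shares match its targets, one variable at a
    time, repeating until the margins stabilize."""
    w = list(weights)
    for _ in range(iterations):
        for var, target in targets.items():
            cats = categories[var]
            total = sum(w)
            level_sums = {level: 0.0 for level in target}
            for wi, c in zip(w, cats):
                level_sums[c] += wi
            # Rescale each level to its target share of the total.
            for i, c in enumerate(cats):
                w[i] *= target[c] * total / level_sums[c]
    return w

# Toy sample of six respondents, weighted to invented targets.
sex = ["m", "m", "f", "f", "f", "f"]
age = ["<45", "45+", "<45", "45+", "<45", "45+"]
w = rake([1.0] * 6,
         {"sex": sex, "age": age},
         {"sex": {"m": 0.49, "f": 0.51},
          "age": {"<45": 0.45, "45+": 0.55}})

share_m = sum(wi for wi, s in zip(w, sex) if s == "m") / sum(w)
print(round(share_m, 2))  # 0.49
```

Unlike cell-based weighting, raking needs only the marginal distribution of each variable, not targets for every combination of cells, which is what reduces the chance of extreme weights.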

From October 2008 to June 2015, data were post-stratified to Census region by sample type; rim weights then were calculated using Census parameters for age, race/ethnicity, sex and education. The precision of race/ethnicity weights was enhanced in April 2013. In July 2015, post-stratification by sample type was discontinued and Census region and phone service (landline only, dual service and cell-only) were added to the rim weighting variables. Weights are capped at lows of 0.2 and highs of 6.

Surveys commonly are weighted to the number of telephone lines in each respondent’s home to adjust for the higher selection probability of multiple-line households. ABC News has studied the effect of such weighting (Merkle & Langer, Public Opinion Quarterly, Spring 2008), concluding that it carries the risk of distortion and, when done properly, has no meaningful impact on the data. ABC News polls therefore are not weighted to the number of household phone lines.

Sampling Error

Poll results may deviate from full population values because they rely on a sample rather than a census of the full population. Sampling error can be calculated when probability sampling methods, such as those described here, are employed, using the standard formula (at the 95 percent confidence level) of (SQRT(.25/sample size))*1.96, plus adjustment for design effects. Other factors also can produce differences in poll results, such as question wording and order, and systematic noncoverage or selection bias.
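Applied to typical sample sizes, the formula works out as follows. The design-effect handling is a standard simplification (the margin is inflated by the square root of the design effect); a specific poll's adjustment may differ.

```python
from math import sqrt

def sampling_error(n, design_effect=1.0):
    """95 percent confidence-level sampling error, in percentage
    points, for a proportion near 50 percent; the design effect
    inflates the margin by its square root."""
    return sqrt(0.25 / n) * 1.96 * sqrt(design_effect) * 100

print(round(sampling_error(1000), 1))  # 3.1 points for n = 1,000
print(round(sampling_error(100), 1))   # 9.8 points for n = 100
```

The n = 100 case illustrates why subgroup results carry substantially wider error margins than full-sample results.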

As a function of sample size, sampling error is higher for subgroups. We analyze subgroups only as small as 100 cases (or very near it). See our fuller description of sampling error here and our online margin-of-error calculator here.

Response Rates

A survey’s response rate represents its contact rate (the number of households reached out of total telephone numbers dialed, excluding an estimate of nonworking and business numbers) multiplied by its cooperation rate (the number of individuals who complete interviews out of total households reached).
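The arithmetic can be illustrated with invented figures:

```python
def response_rate(households_reached, numbers_dialed, interviews_completed):
    """Response rate as contact rate times cooperation rate; the
    dialed-number count is assumed already to exclude estimated
    nonworking and business numbers."""
    contact = households_reached / numbers_dialed
    cooperation = interviews_completed / households_reached
    return contact * cooperation

# Illustrative figures: 2,500 households reached from 10,000 eligible
# numbers dialed, yielding 1,000 completed interviews.
rate = response_rate(2500, 10000, 1000)
print(f"{rate:.0%}")  # 25% contact x 40% cooperation = 10%
```

Note that the product reduces to completed interviews divided by eligible numbers dialed; the decomposition matters because, as discussed below, design choices can raise one component while lowering the other.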

Response rates are calculated using sample dispositions. In November 2014 we posted online available sample dispositions for all ABC News and ABC News/Washington Post polls since 1999.

It cannot be assumed that a higher response rate in and of itself ensures greater data integrity. By including business-listed numbers, for instance, we increase coverage yet decrease contact rates (and therefore overall response rates). On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage. (See Langer, Public Perspective, May 2003.)

Research has found no significant attitudinal biases as a result of response rate differences. A study published in 2000, “Consequences of Reducing Nonresponse in a National Telephone Survey” (Keeter, Miller, Kohut, Groves & Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey” (Keeter, Kennedy, Dimock, Best & Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found “little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates.” Still another Pew comparison, in 2012, with a yet lower response rate, had similar results. As far back as 1981, in “Questions & Answers in Attitude Surveys,” Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), “Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences.”

Among many other sources, in “The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms” (Advances in Telephone Survey Methodology, Chapter 23, Wiley, 2007), Holbrook, Krosnick and Pfent reported that “lower response rates seem not to substantially decrease demographic representativeness within the range we examined. This evidence challenges the assumptions that response rates are a key indicator of survey quality.”

Pre-election Polls

Pre-election polling presents particular challenges. As Election Day approaches, these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population – one that exists only on (or, with absentees, shortly before) Election Day. Pre-election polls make their best estimate of this population.

Our practice for ABC News is to develop a range of “likely voter” models, employing elements such as self-reported voter registration, intention to vote, attention to the race, past voting, age, respondents’ knowledge of their polling places and political party identification. We evaluate the level of voter turnout produced by these models and diagnose differences across models when they occur.

ABC News has presented detailed evaluations of our tracking polls at polling conferences and in published work (Langer and Merkle 2001; Merkle, Langer and Lambert 2005; also in Public Opinion Polling in a Globalized World, Springer 2008; Langer et al. 2009).