A summary of ABC News polling standards and methodology follows.
Standards
Langer Research Associates, primary polling provider to ABC News, advises the news division on standards for disclosure, validity, reliability and unbiased content in survey research and, when requested, evaluates data to establish whether they meet these standards.
On disclosure, in addition to the identities of the research sponsor and field work provider, we require a detailed statement of methodology, the full questionnaire and complete marginal data. If any of these are lacking, we recommend against reporting the results. Proprietary research is not exempted.
Methodologically, in all or nearly all cases we require a probability-based sample, with high levels of coverage of a credible sampling frame. Non-probability, self-selected or so-called “convenience” samples, including internet opt-in, e-mail, “blast fax,” call-in, street intercept and non-probability mail-in samples do not meet our standards for validity and reliability, and we recommend against reporting them.
We do accept some probability-based surveys that do not meet our own methodological standards – in terms of within-household respondent selection, for example – but may recommend cautious use of such data, with qualifying language. We recommend against reporting others, such as pre-recorded autodialed surveys, even when a random-digit dialed telephone sample is employed.
Langer Research Associates has published briefing papers summarizing recent research on non-probability sampling, including opt-in online surveys, the use of social media (and related approaches) to estimate public opinion and challenges in the use of “big data.” We’ve also commented on non-probability sampling in the Fall 2013 issue of the Journal of Survey Statistics and Methodology and in a 2012 presentation at the annual conference of the American Association for Public Opinion Research.
In terms of survey content, we examine methodological statements for misleading or false claims, questionnaires for leading or biasing wording or ordering, and analyses and news releases for inaccurate or selective conclusions.
In addition to recommending against reporting surveys that do not meet appropriate standards for validity and reliability, we promote and strongly encourage the reporting of good-quality polls that break new ground in opinion research.
Sample Design
ABC and The Washington Post direct the methodological approach to full-length ABC/Post polls in consultation with our field work provider for these surveys, Abt Associates of Rockville, MD. Occasional shorter-length ABC/Post polls are conducted via an omnibus survey produced by SSRS of Glen Mills, PA; see methodological details here.
Before October 2008, full-length ABC/Post polls were conducted by calling samples of landline telephone numbers only. From October 2008 through June 2015 we added cell phone interviews via a non-overlapping dual-frame sample design, with separate sampling frames for landline and cell phone-only respondents, as detailed in this paper. The cell phone-only proportion, based on data from the National Health Interview Survey, grew from 100 to 335 interviews per 1,000 during this period.
The non-overlapping design served well, especially through a time in which cell phone interviews were much costlier than landline interviews. However, the cost differential has flattened over time and the incidence of cell phone use has continued to grow, producing a shortfall in the number of young adults reached via this design. As a result, in July 2015 we adopted an overlapping dual frame sample design, in which cell phone respondents are interviewed regardless of whether or not they also have a landline.
The proportion of cell phone interviews again is driven by the NHIS estimate of cell phone-only respondents; to achieve our target, 65 percent of all interviews were conducted by cell phone from July 2015 through February 2020, with the remaining 35 percent interviewed via landline. We shifted this proportion to 75/25 percent in March 2020.
Sampling
Cell phone and landline samples are produced by Survey Sampling Inc. of Shelton, Conn. For landline interviews, SSI selects a sample of landline households in the continental United States via random digit dialing, in which all landline telephone numbers, listed and unlisted, have an equal probability of selection. Landline numbers are drawn proportionate to their estimated distribution in the country’s nine Census divisions.
SSI starts with a database of all listed landline telephone numbers, updated on a four- to six-week rolling basis, 25 percent of listings at a time. This database of directory-listed numbers is then used to determine all active blocks – defined as contiguous groups of 100 phone numbers in which more than one residential number is listed. All possible numbers in active blocks are added to the random digit database.
Until 2005, ABC News followed the industry norm of excluding all listed business numbers (compiled from sources such as Yellow Pages directories and the Dun and Bradstreet Business Data database) from the sample. However, an ABC-led study (Merkle, Langer, Cohen, Piekarski, Benford & Lambert, 2009, Public Opinion Quarterly) found that this “cleaning” process excludes respondents who have home-based business-listed phones and no other lines at home on which they take calls, creating 3 percent noncoverage of eligible households with no offsetting gains in productivity. As a result of this evaluation, we do not exclude listed business numbers from our landline sample, with the exception of those in business-only blocks or exchanges.
Each telephone exchange in the landline sample is assigned to the county where it’s most prevalent. In the first stage of selection, the database is sorted by state and county, and the number of telephone numbers to be sampled within each county is determined using systematic sampling procedures from a random start, such that each county is assigned a sample size proportional to its share of possible numbers. In the second stage of selection, telephone numbers are sorted within county by area code, exchange and active block, and using systematic sampling procedures from a random start, individual phone numbers within each county are selected. The sampled phone numbers are pre-dialed via a non-ringing auto-dialer to reduce dialing of non-working numbers.
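To make the two-stage logic concrete, here is a minimal sketch of the first-stage county allocation via systematic selection from a random start. The county names and frame sizes are hypothetical, and the production procedure operates on the full sorted frame of possible numbers rather than this toy one.

```python
import random

def county_allocation(county_sizes, total_n):
    """Systematic selection from a random start over the sorted frame,
    so each county's allocation is proportional to its share of possible
    numbers. County names and sizes here are hypothetical."""
    interval = sum(county_sizes.values()) / total_n
    start = random.uniform(0, interval)
    alloc = {county: 0 for county in county_sizes}

    counties = iter(sorted(county_sizes.items()))  # frame sorted by county
    county, upper = next(counties)
    for point in (start + i * interval for i in range(total_n)):
        while point >= upper:  # advance to the county spanning this selection point
            county, size = next(counties)
            upper += size
        alloc[county] += 1
    return alloc

print(county_allocation({"Fairfield": 120_000, "Hartford": 90_000, "New Haven": 90_000}, 100))
# -> {'Fairfield': 40, 'Hartford': 30, 'New Haven': 30}
```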
For the cell phone sample, SSI begins with a monthly listing of every existing telephone area code and exchange. About half of these are pooled by their producers in contiguous groups of ten 100-blocks, or 1,000-blocks, with information including whether each pooled 1,000-block does or does not include cell phone numbers, either solely or on a shared basis with landline numbers.
All cell-inclusive 1,000-blocks are included in the cell phone sample. For numbers that are not 1,000-block pooled, cell phone service information is available at the exchange level only; therefore all numbers in those exchanges also are included. All numbers used in cell phone sampling are then handled at the 100-block level. Given the absence of any cell phone directory, all 100-blocks in dedicated wireless exchanges and 1,000-blocks used for sampling purposes are considered active.
For exchanges or 1,000-blocks that have been classified by their carrier as providing both landline and wireless service, each 100-block is compared to the database of landline 100-blocks; 100-blocks that appear on the landline frame are removed from the wireless frame and 100-blocks with no directory-listed numbers are retained. This ensures that the wireless frame and list-assisted RDD frame are mutually exclusive while still providing coverage of prefixes and 1,000-blocks that are classified as including both landline and wireless service.
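In code terms the partition rule reduces to a set difference; a minimal sketch, with invented 100-block stems (the first eight digits of a ten-digit number):

```python
def wireless_frame_blocks(mixed_service_blocks, landline_frame_blocks):
    """Mixed-service 100-blocks that already appear on the list-assisted
    landline frame are removed; those with no directory-listed numbers are
    retained on the wireless frame. Stems here are hypothetical."""
    return {b for b in mixed_service_blocks if b not in landline_frame_blocks}

print(wireless_frame_blocks({"21255501", "21255502"}, {"21255501"}))
# {'21255502'} -> covered by the cell sample only
```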
Each 100-block is assigned to a county based on the billing coordinates of the exchange. The database is sorted by county code, carrier name and 100-block. A sampling interval is determined by dividing the universe of eligible 100-blocks by the desired sample size. From a random start within the first sampling interval, a systematic nth selection of 100-blocks is performed and a 2-digit random number between 00 and 99 is appended to each selected 100-block stem.
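A short sketch of that selection step, assuming a sorted list of eligible 100-block stems already deduplicated against the landline frame (the identifiers below are invented):

```python
import random

def select_cell_numbers(eligible_blocks, n):
    """Systematic nth selection of 100-blocks from a random start within
    the first sampling interval, appending a 2-digit random number to each
    selected stem to form a dialable number."""
    interval = len(eligible_blocks) / n      # universe of blocks / desired sample
    start = random.uniform(0, interval)      # random start in the first interval
    numbers = []
    for i in range(n):
        stem = eligible_blocks[int(start + i * interval)]
        numbers.append(f"{stem}{random.randint(0, 99):02d}")  # append 00-99 suffix
    return numbers

blocks = sorted(f"212555{b:02d}" for b in range(100))  # hypothetical sorted frame
print(select_cell_numbers(blocks, 5))
```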
Interviewing
In each sample, phone numbers are released for interviewing in replicates by Census region (cell) or division (landline) to allow for sample control. Numbers are called multiple times during the field period in multi-night polls; the standard for full-length ABC/Post polls is a minimum of six calls to each number. Interviews are conducted via a computer-assisted telephone interviewing (CATI) system. Abt SRBI’s professional interviewers, and their supervisors, are extensively trained in interviewing practices, including techniques designed to achieve the highest possible respondent cooperation.
For landline respondents, interviewers ask to speak with the youngest male or youngest female at home. Cell sample respondents are screened for age eligibility (18+); they are not offered compensation, but a reimbursement check is offered if use of minutes is raised as an objection. Their place of residence is checked and their Census region adjusted if necessary.
As of April 2013, Spanish-language interviewing was added to full-length ABC/Post polls for respondents who indicate a preference to be interviewed in Spanish. Spanish-language interviewing in SSRS omnibus surveys began in October 2009.
Coverage
Our sample excludes adults who don’t have a cell or landline phone (3.2 percent, per the NHIS); who don’t speak English or Spanish (1.5 percent, per the American Community Survey conducted by the U.S. Census Bureau); and who live in institutional group facilities where individual telephone access is disallowed (chiefly, adult correctional facilities), about 0.9 percent. Allowing for some overlap of these groups, the frame covers approximately 95 percent of the target population, U.S. adults age 18+.
Weighting
Data are adjusted to account for the greater probability of respondents who have both a cell and landline phone, compared with those who are cell-only or landline-only. The data then are weighted using demographic information from the U.S. Census and NHIS to adjust for variance from population values. Weights may include average partisan self-identification in current and recent ABC/Post data, based on a standardized rule.
Until 2008 we used cell-based weighting, in which respondents were classified into one of 48 or 32 cells (depending on sample size) based on their age, race, sex and education; weights were assigned so the proportion in each cell matched the Census Bureau’s most recent Current Population Survey data. To achieve greater consistency and reduce the chance of large weights, in January 2008 we adopted iterative weighting, also known as raking or rim weighting, in which the sample is weighted sequentially to Census targets one variable at a time, continuing until the optimum distribution is achieved.
From October 2008 to June 2015, data were post-stratified to Census region by sample type; rim weights then were calculated using Census parameters for age, race/ethnicity, sex and education. The precision of race/ethnicity weights was enhanced in April 2013. In July 2015, post-stratification by sample type was discontinued and Census region and phone service (landline only, dual service and cell-only) were added to the rim weighting variables. Weights are capped at lows of 0.2 and highs of 6.
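For illustration only, here is a minimal raking loop with the weight caps applied at the end. The variables, category codes and target shares are hypothetical stand-ins for the Census and NHIS benchmarks described above, and the production weighting may differ in detail.

```python
import numpy as np

def rake(base_weights, categories, targets, max_iter=100, tol=1e-6):
    """Minimal sketch of raking (iterative proportional fitting).

    base_weights : one weight per respondent (e.g., a frame-overlap base weight)
    categories   : dict of variable name -> per-respondent category codes
    targets      : dict of variable name -> {category: population share}
    """
    w = np.asarray(base_weights, dtype=float).copy()
    for _ in range(max_iter):
        worst = 0.0
        for var, codes in categories.items():
            total = w.sum()
            for cat, share in targets[var].items():
                mask = codes == cat
                factor = share * total / w[mask].sum()  # assumes no empty category
                w[mask] *= factor
                worst = max(worst, abs(factor - 1.0))
        if worst < tol:  # stop once one full pass barely moves the weights
            break
    return np.clip(w, 0.2, 6.0)  # cap weights at 0.2 and 6, per the text

# Hypothetical demonstration: rake 500 equal base weights to two margins.
rng = np.random.default_rng(1)
codes = {"sex": rng.integers(0, 2, 500), "region": rng.integers(0, 4, 500)}
margins = {"sex": {0: 0.48, 1: 0.52},
           "region": {0: 0.17, 1: 0.21, 2: 0.38, 3: 0.24}}
weights = rake(np.ones(500), codes, margins)
print(weights.mean().round(3))  # stays near 1, since the margins each sum to 1
```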
Surveys commonly are weighted to the number of telephone lines in each respondent’s home to adjust for the higher probability of selection of multiple-line households. ABC News has studied the effect of such weighting (Merkle & Langer, Public Opinion Quarterly, Spring 2008) concluding that it carries the risk of distortion, and, when done properly, has no meaningful impact on the data. ABC News polls therefore are not weighted to the number of household phone lines.
The design effect due to weighting describes the overall impact of sample weights and is used in calculating the survey’s margin of sampling error. ABC/Post poll design effects averaged 1.25 in 2016, rising to an average of 1.38 in 2019. This increase prompted our decision to boost our proportion of cell phone interviews to 75 percent, improving the representativeness of our samples. As of March 2020, our design effect averaged 1.21.
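The formula is not spelled out here, but the design effect due to weighting is conventionally approximated with Kish’s formula, deff = n·Σw²/(Σw)²; a sketch under that assumption:

```python
import numpy as np

def kish_design_effect(weights):
    """Kish's approximation of the design effect due to weighting:
    deff = n * sum(w^2) / (sum(w))^2. A standard approximation, assumed
    here; the text does not specify the exact formula used."""
    w = np.asarray(weights, dtype=float)
    return len(w) * (w ** 2).sum() / w.sum() ** 2

print(kish_design_effect([1.0] * 10))                       # 1.0: equal weights
print(round(kish_design_effect([0.5] * 5 + [1.5] * 5), 2))  # 1.25: unequal weights
```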
Sampling Error
Poll results may deviate from full population values because they rely on a sample rather than a census of the full population. Sampling error can be calculated when probability sampling methods, such as those described here, are employed, using the standard formula (at the 95 percent confidence level) of (SQRT(.25/sample size))*1.96, plus adjustment for design effects. There can be other sources of differences in polls, such as question wording and order and systematic noncoverage or selection bias.
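Applying that formula with a multiplicative square-root adjustment for the design effect (the usual convention, assumed here), a 1,000-interview poll with the 1.21 design effect cited above yields a margin of roughly plus or minus 3.4 points:

```python
import math

def margin_of_error(n, design_effect=1.0, z=1.96):
    """95 percent margin of sampling error for a proportion near 50 percent:
    z * sqrt(0.25 / n), inflated by the square root of the design effect."""
    return z * math.sqrt(0.25 / n) * math.sqrt(design_effect)

print(round(100 * margin_of_error(1000), 1))        # 3.1 points, unadjusted
print(round(100 * margin_of_error(1000, 1.21), 1))  # 3.4 points with deff = 1.21
```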
As a function of sample size, sampling error is higher for subgroups. We analyze subgroups only when they include at least 100 cases (or very nearly that many). See our fuller description of sampling error here and our online margin-of-error calculator here.
Response Rates
A survey’s response rate represents its contact rate (the number of households reached out of total telephone numbers dialed, excluding an estimate of nonworking and business numbers) multiplied by its cooperation rate (the number of individuals who complete interviews out of total households reached).
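A toy decomposition, with invented sample dispositions, showing how the two rates multiply into a response rate:

```python
def response_rate(households_reached, numbers_dialed, nonworking_business,
                  completed_interviews):
    """Response rate = contact rate x cooperation rate, per the definition
    above. All figures in the example below are hypothetical."""
    contact = households_reached / (numbers_dialed - nonworking_business)
    cooperation = completed_interviews / households_reached
    return contact * cooperation

print(response_rate(households_reached=2_000, numbers_dialed=12_000,
                    nonworking_business=4_000, completed_interviews=1_000))
# 0.25 contact x 0.5 cooperation = 0.125
```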
Response rates are calculated using sample dispositions. In November 2014 we posted online available sample dispositions for all ABC News and ABC News/Washington Post polls since 1999, and have updated them regularly since.
It cannot be assumed that a higher response rate in and of itself ensures greater data integrity. By including business-listed numbers, for instance, we increase coverage yet decrease contact rates (and therefore overall response rates). On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage. (See Langer, Public Perspective, May 2003.)
Research has found no significant attitudinal biases as a result of response rate differences. A study published in 2000, “Consequences of Reducing Nonresponse in a National Telephone Survey” (Keeter, Miller, Kohut, Groves & Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey” (Keeter, Kennedy, Dimock, Best & Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found “little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates.” Still another Pew comparison, in 2012, with a yet lower response rate, had similar results. As far back as 1981, in “Questions & Answers in Attitude Surveys,” Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), “Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences.”
Among many other sources, Holbrook, Krosnick and Pfent, in “The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms” (Advances in Telephone Survey Methodology, Chapter 23, Wiley, 2007), reported that “lower response rates seem not to substantially decrease demographic representativeness within the range we examined. This evidence challenges the assumptions that response rates are a key indicator of survey quality.”
Pre-election Polls
Pre-election polling presents particular challenges. As Election Day approaches, these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population – one that exists only on (or, with absentees, shortly before) Election Day. Pre-election polls make their best estimate of this population.
Our practice for ABC News is to develop a range of “likely voter” models, employing elements such as self-reported voter registration, intention to vote, attention to the race, past voting, age, respondents’ knowledge of their polling places and political party identification. We evaluate the level of voter turnout produced by these models and diagnose differences across models when they occur.
ABC News has presented detailed evaluations of our tracking polls at polling conferences and in published work (Langer and Merkle 2001; Merkle, Langer and Lambert 2005; also in Public Opinion Polling in a Globalized World, Springer 2008; Langer et al. 2009).
ABC News/Washington Post 2020 Battleground State Polls
ABC News/Washington Post 2020 battleground state surveys were based on 775 to 1,100 random-sample telephone interviews with adults in selected states. The surveys were designed by Langer Research Associates for ABC News in conjunction with The Washington Post, with sampling, fieldwork and data processing by Abt Associates.
In sampling, phone numbers were drawn randomly from cell phone area codes and exchanges assigned to cell service provider billing centers in the state, a database of out-of-state cell phone numbers whose users are billed within the state and landlines with in-state area codes and exchanges. Cell/landline proportions were per the most recent estimates from the U.S. Centers for Disease Control and Prevention’s National Health Interview Survey. In-state/out-of-state cell proportions were per Marken et al. in Survey Practice (2019). Samples were composed as follows:
- Minnesota (Sept. 8-13): Seventy-four percent cell phone numbers, 65% in-state and 9% out-of-state; 26% landline numbers.
- Wisconsin (Sept. 8-13): Seventy-one percent cell phone numbers, 62% in-state and 9% out-of-state; 29% landline numbers.
- Arizona (Sept. 15-20): Eighty percent cell phone numbers, 67% in-state and 13% out-of-state; 20% landline numbers.
- Florida (Sept. 15-20): Seventy-nine percent cell phone numbers, 68% in-state and 11% out-of-state; 21% landline numbers.
- Pennsylvania (Sept. 21-26): Fifty-three percent cell phone numbers, 48% in-state and 5% out-of-state; 47% landline numbers.
- North Carolina (Oct. 12-17): Fifty-four percent cell phone numbers, 48% in-state and 6% out-of-state; 46% landline numbers.
- Michigan (Oct. 20-25): Seventy-four percent cell phone numbers, 68% in-state and 6% out-of-state; 26% landline numbers.
- Wisconsin (Oct. 20-25): Seventy-one percent cell phone numbers, 63% in-state and 8% out-of-state; 29% landline numbers.
- Florida (Oct. 24-29): Seventy-nine percent cell phone numbers, 69% in-state and 11% out-of-state; 21% landline numbers.
- Pennsylvania (Oct. 24-29): Sixty-four percent cell phone numbers, 57% in-state and 7% out-of-state; 36% landline numbers.
Sample frames were sorted by state geopolitical regions. As with ABC/Post national polls, a base weight (frame overlap adjustment) was applied: 1.0 for landline-only or cell-only respondents and 0.5 for dual cell and landline users. Data then were weighted via iterative proportional fitting to the U.S. Census Bureau’s one-year 2018 American Community Survey benchmarks for sex, region, education, age, race/ethnicity and race x education, phone service per NHIS and, in the Arizona and Florida samples, urbanicity. For missing data, a weighting cell was set to 1% for respondents who did not answer a given question used in weighting, and the substantive categories were adjusted by multiplying the ACS benchmarks by 0.99. Weights were trimmed to a range of 0.2 to 6.0, again as with ABC/Post national polls.
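Two of those steps lend themselves to short sketches, the frame-overlap base weight and the missing-data adjustment; the ACS shares below are hypothetical.

```python
def base_weight(has_landline, has_cell):
    """Frame-overlap adjustment described above: dual users can be reached
    through either frame, so they get half the weight of single-frame
    respondents."""
    return 0.5 if (has_landline and has_cell) else 1.0

def adjust_benchmarks(acs_shares, missing_share=0.01):
    """Missing-data handling described above: give respondents who skipped
    a weighting question a 1% cell and scale the substantive ACS categories
    by 0.99 so the shares still sum to 1."""
    adjusted = {cat: share * (1.0 - missing_share) for cat, share in acs_shares.items()}
    adjusted["missing"] = missing_share
    return adjusted

print(base_weight(has_landline=True, has_cell=True))   # 0.5
print(adjust_benchmarks({"college degree": 0.31, "no degree": 0.69}))
# {'college degree': 0.3069, 'no degree': 0.6831, 'missing': 0.01}
```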
Up to its final disposition, each sampled phone number received a minimum of five dialings, at different times of day on different days. Interviews were carried out by professional live interviewers employed by Abt Associates, working from Abt call centers in Fort Myers, Florida; Charleston, West Virginia; and McAllen, Texas; or from their homes, under full remote supervision. Cell respondents were screened for age 18+ and their state of residence was confirmed. Landline respondents were selected by asking for the youngest male or female household member at home, age 18+. Interviews were conducted in English in all states, with additional Spanish interviews in Arizona and Florida.