ABC News' polling methodology and standards

The nuts and bolts of our public opinion surveys.

A summary of ABC News polling standards and survey methodology follows.

Standards

ABC News maintains standards for disclosure, validity, reliability and unbiased content in survey research and evaluates surveys being considered for use to establish whether they meet these standards.

On disclosure, in addition to the identities of the research sponsor and field work provider, ABC requires a detailed statement of methodology, the full questionnaire and complete marginal data. Proprietary research is not exempted.

Methodologically, in all or nearly all cases ABC requires a probability-based sample, with high levels of coverage of a credible sampling frame. Non-probability, self-selected or so-called "convenience" samples, including internet opt-in, e-mail, "blast fax," call-in, street intercept and non-probability mail-in samples do not meet ABC standards for validity and reliability.

ABC may accept some probability-based surveys that do not meet its own methodological standards but may recommend cautious use of such data, with qualifying language.

In terms of survey content, ABC examines methodological statements for misleading or false claims, questionnaires for leading or biasing wording or ordering, and analyses and news releases for inaccurate or selective conclusions.

Survey methodology

Full-length surveys for ABC News are produced by Langer Research Associates. In January 2024, Langer Research transitioned these surveys from random-digit-dialed telephone interviewing to data collection via the probability-based Ipsos KnowledgePanel®. Survey sponsors are, variably, ABC News/Ipsos and ABC News/Washington Post/Ipsos.

KnowledgePanel members are randomly recruited via address-based sampling (using the latest Delivery Sequence File of the U.S. Postal Service) to participate in surveys. Those who lack internet access are provided with a tablet and internet connection at no cost. Panelists who are selected to participate in a survey are sent a password-protected, one-time log-in. Survey participants receive a token incentive, usually the equivalent of $1 or $2 in value.

To select respondents, Ipsos weights the panel by sex, age, race/ethnicity, education, region, household income, household size, marital status, homeownership status and metropolitan area per the 2023 March Supplement of the Current Population Survey; language dominance per the 2022 American Community Survey (both produced by the U.S. Census Bureau); and 2020 vote participation and vote choice from the 2020 Federal Elections report (using categories for Biden, Trump, other candidates, nonvoter). These selection weights are used as the measure of size in probability-proportional-to-size sample selection, in which panelists with larger selection weights have a larger probability of selection.
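As an illustration of the selection step, the sketch below implements systematic probability-proportional-to-size sampling, a common PPS method in which units with larger selection weights have a proportionally larger chance of being drawn. This is a generic sketch, not Ipsos's actual implementation, whose details are not specified here.

```python
import random

def pps_systematic(weights, n):
    """Systematic PPS selection: lay units end-to-end along a line whose
    segment lengths equal their selection weights, then pick every
    (total/n)-th point starting from a random offset. A unit's chance of
    selection is proportional to its weight. (A unit whose weight exceeds
    the interval can be selected more than once.)"""
    total = sum(weights)
    interval = total / n          # spacing between selection points
    target = random.uniform(0, interval)  # random start on the line
    picks, cum = [], 0.0
    for i, w in enumerate(weights):
        cum += w
        while cum > target:       # selection point falls in this unit's segment
            picks.append(i)
            target += interval
    return picks
```

With equal weights this reduces to an equal-probability systematic sample; unequal weights tilt selection toward higher-weight panelists, as the paragraph above describes.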

The surveys are conducted in English and Spanish. Initial nonrespondents receive one email reminder; harder-to-reach respondents receive an additional reminder. In late August 2024, as an example, invitations were sent to 4,335 panelists, resulting in 2,496 completed interviews. In quality control, 47 respondents were removed for skipping half or more of the questions or for completing the survey in the fastest 1 percent of completion times.

For more precise analysis, surveys may include oversamples of approximately 100 Black people, 100 Hispanic people and 100 people age 18-29, with these groups scaled to their correct proportion of the population in weighting.

Data typically are weighted by iterative proportional fitting to adjust for gender by age, race/ethnicity, education, census region by metropolitan status, household income (per the CPS), language dominance (per the ACS), 2020 vote and political party identification (per the 2024 National Public Opinion Reference Survey from the Pew Research Center).
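Iterative proportional fitting (also called raking) alternately scales a table's rows and columns until its margins match target population totals. The sketch below shows the procedure on a minimal two-variable table; the actual weighting described above uses many more variables, and the figures here are invented for illustration.

```python
def rake(table, row_targets, col_targets, iters=100):
    """Iterative proportional fitting on a two-way table of counts:
    repeatedly rescale each row to its target total, then each column
    to its target total, until both sets of margins converge."""
    t = [row[:] for row in table]  # work on a copy
    for _ in range(iters):
        for i, rt in enumerate(row_targets):          # scale rows
            s = sum(t[i])
            t[i] = [v * rt / s for v in t[i]]
        for j, ct in enumerate(col_targets):          # scale columns
            s = sum(row[j] for row in t)
            for row in t:
                row[j] = row[j] * ct / s
    return t
```

For example, raking the table [[10, 20], [30, 40]] to row totals [50, 50] and column totals [40, 60] yields adjusted cell values whose margins match both targets while preserving the original table's interaction structure.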

These surveys have a design effect due to weighting of approximately 1.1.

Contact us for methodological details on any specific survey.

Other surveys

Methodological details on telephone surveys previously produced for ABC News by Langer Research Associates are available here.

Sampling error

Poll results may deviate from full population values because they rely on a sample rather than a census of the full population. Sampling error can be calculated given probability sampling methods, using the standard formula (at the 95 percent confidence level) of (SQRT(.25/sample size))*1.96, plus adjustment for design effects. There can be other sources of differences in polls, such as question wording and order and systematic noncoverage or selection bias.
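The formula above can be computed directly. In the sketch below, the design-effect adjustment multiplies the margin of error by the square root of the design effect, which is one standard convention; the text above does not spell out the exact adjustment used.

```python
import math

def margin_of_error(n, design_effect=1.0, z=1.96):
    """95 percent margin of error for a proportion near 50 percent:
    sqrt(.25 / sample size) * 1.96, inflated by the square root of the
    design effect (a common adjustment convention, assumed here)."""
    return math.sqrt(0.25 / n) * z * math.sqrt(design_effect)
```

For the late-August 2024 example above, 2,496 completed interviews with a design effect of roughly 1.1 yield a margin of error of about plus or minus 2 percentage points for the full sample.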

As a function of sample size, sampling error is higher for subgroups. ABC News poll reports analyze subgroups of ~100 cases or larger. See a fuller description of sampling error here and Langer Research Associates' online margin-of-error calculator here.

Response rates

A survey's response rate represents its contact rate multiplied by its cooperation rate. Response rates are calculated using sample dispositions. In November 2014, Langer Research Associates posted available sample dispositions for all ABC News and ABC News/Washington Post polls since 1999, and has updated them regularly since.
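The relationship just described is a simple product, shown here for concreteness with invented illustrative rates:

```python
def response_rate(contact_rate, cooperation_rate):
    """A survey's response rate: the share of sampled units reached
    (contact rate) times the share of those reached who complete the
    survey (cooperation rate)."""
    return contact_rate * cooperation_rate
```

For instance, a survey that reaches 50 percent of its sampled units and obtains cooperation from 60 percent of those reached has a 30 percent response rate.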

A higher response rate in and of itself does not ensure greater data quality. In telephone surveys, for example, including business-listed phone numbers improves coverage yet decreases contact rates (and therefore overall response rates). On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage (see Langer, Public Perspective, May 2003).

Researchers have not found consistent attitudinal biases as a result of response rate differences. A study published in 2000, "Consequences of Reducing Nonresponse in a National Telephone Survey" (Keeter, Miller, Kohut, Groves & Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, "Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey" (Keeter, Kennedy, Dimock, Best & Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found "little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates." Still another Pew comparison, in 2012, with a yet lower response rate, had similar results. As far back as 1981, in "Questions & Answers in Attitude Surveys," Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), "Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences."

Among many other sources, in "The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms" (Advances in Telephone Survey Methodology, Chapter 23, Wiley, 2008), Holbrook, Krosnick and Pfent reported that "lower response rates seem not to substantially decrease demographic representativeness within the range we examined. This evidence challenges the assumptions that response rates are a key indicator of survey quality."

Pre-election polls

Pre-election polling presents particular challenges. As Election Day approaches, these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population, one that exists only on or shortly before Election Day. Pre-election polls make their best estimate of this population.

In pre-election vote preference polling for ABC News, Langer Research Associates develops a range of likely voter models, including elements such as self-reported voter registration, intention to vote, attention to the race, past voting, age, and political party identification, among others. Langer Research evaluates voter turnout estimates produced by these models and diagnoses differences across models when they occur.

ABC News has presented detailed evaluations of its election tracking polls at polling conferences and in published work (Langer and Merkle 2001; Merkle, Langer and Lambert 2005; also in Public Opinion Polling in a Globalized World, Springer 2008; Langer et al. 2009).