ABC News' polling methodology and standards

The nuts and bolts of our public opinion surveys.

By ABC News
January 23, 2024, 1:00 PM

A summary of ABC News polling standards and survey methodology follows.

Standards

ABC News maintains standards for disclosure, validity, reliability and unbiased content in survey research, and evaluates any survey being considered for use to establish whether it meets these standards.

On disclosure, in addition to the identities of the research sponsor and fieldwork provider, ABC requires a detailed statement of methodology, the full questionnaire and complete marginal data. Proprietary research is not exempted.

Methodologically, in all or nearly all cases ABC requires a probability-based sample, with high levels of coverage of a credible sampling frame. Non-probability, self-selected or so-called "convenience" samples, including internet opt-in, e-mail, "blast fax," call-in, street intercept and non-probability mail-in samples do not meet ABC standards for validity and reliability.

ABC may accept some probability-based surveys that do not meet its own methodological standards but may recommend cautious use of such data, with qualifying language.

In terms of survey content, ABC examines methodological statements for misleading or false claims, questionnaires for leading or biasing wording or ordering, and analyses and news releases for inaccurate or selective conclusions.

Survey methodology

The first full-length ABC News/Ipsos survey was produced in January 2024, with design, management and analysis by Langer Research Associates and fieldwork by Ipsos using its probability-based online KnowledgePanel®.

KnowledgePanel members are randomly recruited via address-based sampling (using the latest Delivery Sequence File of the U.S. Postal Service) to participate in surveys. Those who lack internet access are provided with a tablet and internet connection at no cost.

Panelists who are selected to participate in a survey are sent a password-protected, one-time log-in. Survey participants receive a token incentive, usually the equivalent of $1 or $2 in value.

To select respondents for the January survey, Ipsos weighted the panel by sex, age, race/ethnicity, education, region, household income, homeownership status and metropolitan area per the 2023 March Supplement of the Current Population Survey; by language dominance per the 2022 American Community Survey (both produced by the U.S. Census Bureau); and by 2020 vote participation and vote choice per the 2020 Federal Elections report (using categories for Biden, Trump, other candidates and nonvoters). These selection weights then were used as the measure of size in probability-proportional-to-size sample selection, in which panelists with larger selection weights have a larger probability of selection.
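That last step can be illustrated in a few lines of code. Below is a minimal sketch of systematic probability-proportional-to-size selection, one common implementation of PPS sampling; the panel, the weights and the choice of this particular variant are assumptions for illustration, not a description of Ipsos's production procedure.

```python
import random

def systematic_pps(units, sizes, n):
    """Draw n units with probability proportional to a size measure."""
    total = sum(sizes)
    step = total / n                    # sampling interval
    start = random.uniform(0, step)     # random start in the first interval
    targets = [start + i * step for i in range(n)]
    pairs = iter(zip(units, sizes))
    unit, size = next(pairs)
    cum = 0.0
    selected = []
    for t in targets:
        while cum + size < t:           # walk until t falls in this unit's span
            cum += size
            unit, size = next(pairs)
        selected.append(unit)
    return selected

# Hypothetical panelists: "B" is six times as likely to be drawn as "D"
panel = ["A", "B", "C", "D", "E"]
weights = [1.0, 3.0, 1.5, 0.5, 2.0]
print(systematic_pps(panel, weights, 2))
```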

The survey was conducted in English and Spanish. Initial nonrespondents received one email reminder. Invitations were sent to 3,636 panelists, resulting in 2,228 completed interviews.

As a quality-control measure, 27 respondents were removed for completing the survey in the fastest 1 percent of times within their leaned party identification path, or for skipping all questions they were eligible to answer.
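A screen of that kind can be expressed simply. The sketch below flags the fastest 1 percent of completion times within each questionnaire path, using hypothetical data; the function name and the exact flagging rule are illustrative assumptions, not the production procedure.

```python
import numpy as np

def flag_speeders(times, paths, pct=1.0):
    """Mark the fastest pct percent of completion times within each path."""
    times, paths = np.asarray(times, dtype=float), np.asarray(paths)
    flags = np.zeros(len(times), dtype=bool)
    for path in np.unique(paths):
        mask = paths == path
        cutoff = np.percentile(times[mask], pct)   # e.g., 1st-percentile time
        flags |= mask & (times < cutoff)
    return flags

# Hypothetical times in seconds; only respondent 0 is implausibly fast
print(flag_speeders([60, 600, 640, 600, 600], ['D', 'D', 'D', 'R', 'R']))
```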

The data were weighted to adjust for gender by age, race/ethnicity, education, census region, metropolitan status, household income (per the CPS), language dominance (per the ACS) and political party identification (per recent ABC News/Washington Post surveys).

Weighting categories were:

• Sex (male, female) by age (18-29, 30-44, 45-59 and 60+)
• Race/Hispanic ethnicity (white non-Hispanic, Black non-Hispanic, other or 2+ races non-Hispanic, Hispanic)
• Education (high school graduate or less, some college, bachelor's and beyond)
• Census region (Northeast, Midwest, South, West)
• Metropolitan status (metro, non-metro)
• Household income (under $25,000, $25,000-$49,999, $50,000-$74,999, $75,000-$99,999, $100,000-$149,999, $150,000+)
• Language dominance (English dominant, bilingual, Spanish dominant, non-Hispanic)
• Party ID (Democrat, Republican, independent, something else, don't know/skipped)
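A standard way to adjust weights to marginal targets like these is raking, or iterative proportional fitting: each respondent's weight is repeatedly scaled so the weighted sample matches each target margin in turn. The sketch below, with made-up respondents and targets, shows the core loop; whether the production weighting uses exactly this procedure, and details such as weight trimming or convergence checks, goes beyond what the text states.

```python
from collections import defaultdict

def rake(respondents, weights, targets, iterations=25):
    """respondents: dicts like {'sex': 'female', 'region': 'South'}.
    targets: population shares per variable, e.g. {'sex': {'female': .52, ...}}.
    Returns adjusted weights; the total weight is preserved."""
    w = list(weights)
    for _ in range(iterations):
        for var, margin in targets.items():
            totals = defaultdict(float)
            for r, wi in zip(respondents, w):
                totals[r[var]] += wi            # weighted count per category
            total = sum(totals.values())
            for i, r in enumerate(respondents):
                current_share = totals[r[var]] / total
                w[i] *= margin[r[var]] / current_share  # pull toward target
    return w

# Hypothetical respondents and targets
resp = [{'sex': 'female', 'region': 'South'},
        {'sex': 'male', 'region': 'South'},
        {'sex': 'male', 'region': 'West'}]
targets = {'sex': {'female': 0.5, 'male': 0.5},
           'region': {'South': 0.6, 'West': 0.4}}
print([round(x, 3) for x in rake(resp, [1.0, 1.0, 1.0], targets)])
```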

The survey had a design effect due to weighting of 1.2, which inflates the margin of sampling error by a factor of √1.2, or about 1.1, relative to a simple random sample of the same size.

The questionnaire was adjusted as necessary given the mode change from previous ABC News/Washington Post telephone surveys. In trend questions with more than negligible "no opinion" responses by telephone, online respondents were given an explicit "No opinion" option; those who selected it were asked whether and how they leaned. Nonetheless, differences due to mode effects cannot be ruled out.

Other surveys

Methodological details on telephone surveys previously produced for ABC News by Langer Research Associates are available here.

Sampling error

Poll results may deviate from full population values because they rely on a sample rather than a census of the full population. Given probability sampling methods, sampling error can be calculated using the standard formula (at the 95 percent confidence level) of 1.96 * sqrt(0.25/sample size), plus adjustment for design effects. There can be other sources of differences in polls, such as question wording and order and systematic noncoverage or selection bias.
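Put into code, the formula works out as follows. The final sample size of 2,201 (2,228 completes less the 27 quality-control removals) is an inference from the figures above, and the subgroup line simply reuses the full-sample design effect for illustration.

```python
from math import sqrt

def margin_of_error(n, deff=1.0, z=1.96):
    """95 percent margin of error for a 50/50 proportion, deff-adjusted."""
    return z * sqrt(0.25 / n) * sqrt(deff)

# Roughly the January survey: 2,228 completes minus 27 removals, deff of 1.2
print(round(100 * margin_of_error(2201, deff=1.2), 1))   # about 2.3 points
# A ~100-case subgroup at the same deff is far noisier
print(round(100 * margin_of_error(100, deff=1.2), 1))    # about 10.7 points
```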

As a function of sample size, sampling error is higher for subgroups. ABC News poll reports analyze subgroups of ~100 cases or larger. See a fuller description of sampling error here and Langer Research Associates' online margin-of-error calculator here.

Response rates

A survey's response rate represents its contact rate multiplied by its cooperation rate. Response rates are calculated using sample dispositions. In November 2014, Langer Research Associates posted available sample dispositions for all ABC News and ABC News/Washington Post polls since 1999, and has updated them regularly since.
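As a sketch of that arithmetic, with hypothetical disposition counts (published rates rest on fuller AAPOR-style disposition definitions):

```python
def response_rate(eligible, contacted, completed):
    """Contact rate times cooperation rate, per the definition above."""
    contact_rate = contacted / eligible          # share of sample reached
    cooperation_rate = completed / contacted     # share of contacts interviewed
    return contact_rate * cooperation_rate       # equals completed / eligible

# Hypothetical dispositions: 1,000 eligible units, 700 contacts, 350 interviews
print(response_rate(1000, 700, 350))             # 0.35, a 35 percent rate
```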

A higher response rate in and of itself does not ensure greater data quality. In telephone surveys, for example, including business-listed phone numbers improves coverage yet decreases contact rates (and therefore overall response rates). On the other hand, surveys that do no within-household selection, or that use listed-only samples, increase their cooperation or contact rates (and therefore response rates) at the expense of random selection or population coverage (see Langer, Public Perspective, May 2003).

Researchers have not found consistent attitudinal biases as a result of response rate differences. A study published in 2000, "Consequences of Reducing Nonresponse in a National Telephone Survey" (Keeter, Miller, Kohut, Groves & Presser, POQ 64:125-48), found similar results in surveys with 61 and 36 percent response rates. A follow-up in 2006, "Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey" (Keeter, Kennedy, Dimock, Best & Craighill, POQ 70:759-79), based on surveys with 50 and 25 percent response rates, again found "little to suggest that unit nonresponse within the range of response rates obtained seriously threatens the quality of survey estimates." Still another Pew comparison, in 2012, with a yet lower response rate, had similar results. As far back as 1981, in "Questions & Answers in Attitude Surveys," Schuman and Presser, describing two samples with different response rates but similar results, reported (p. 332), "Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences."

Among many other sources, in "The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms" (Advances in Telephone Survey Methodology, Chapter 23, Wiley, 2008), Holbrook, Krosnick and Pfent reported that "lower response rates seem not to substantially decrease demographic representativeness within the range we examined. This evidence challenges the assumptions that response rates are a key indicator of survey quality."

Pre-election polling presents particular challenges. As Election Day approaches, these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population, one that exists only on or shortly before Election Day. Pre-election polls make their best estimate of this population.

In pre-election vote preference polling for ABC News, Langer Research Associates develops a range of likely voter models, including such elements as self-reported voter registration, intention to vote, attention to the race, past voting, age and political party identification. Langer Research evaluates the voter turnout estimates produced by these models and diagnoses differences across models when they occur.
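As an illustration of how such elements can combine into a cutoff-style likely voter model, here is a toy sketch. The scoring items, their equal weighting and the turnout-based cutoff are assumptions for demonstration, not Langer Research Associates' actual models, which the text notes are developed and compared as a range.

```python
def likely_voter_score(r):
    """Toy 0-4 score from self-reports of the kind listed above."""
    keys = ("registered", "intends_to_vote", "follows_race_closely",
            "voted_last_election")
    return sum(1 for k in keys if r.get(k))

def select_likely_voters(respondents, turnout_estimate):
    """Keep the highest-scoring share of respondents matching expected turnout."""
    ranked = sorted(respondents, key=likely_voter_score, reverse=True)
    cutoff = round(turnout_estimate * len(ranked))
    return ranked[:cutoff]

# Hypothetical respondents; a 60 percent turnout estimate keeps the top two
sample = [
    {"registered": True, "intends_to_vote": True, "voted_last_election": True},
    {"registered": True, "intends_to_vote": False},
    {"registered": False},
]
print(select_likely_voters(sample, turnout_estimate=0.6))
```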

ABC News has presented detailed evaluations of its election tracking polls at polling conferences and in published work (Langer and Merkle 2001; Merkle, Langer and Lambert 2005; also in Public Opinion Polling in a Globalized World, Springer 2008; Langer et al. 2009).