Polling isn't broken, but pollsters still face Trump-era challenges

Pollsters are trying to reach more Republicans and gauge turnout in 2024.

May 30, 2024, 3:18 PM

Every election presents different challenges to pollsters. In the past few years, those obstacles have included declining response rates to phone polling, particularly among some Republican or conservative respondents (a sampling issue), and trouble identifying the likely electorate in the face of high turnout (a weighting issue). These difficulties contributed to larger-than-average polling errors in the 2020 presidential election, four years after pollsters also had a worse-than-typical year in 2016 (although primarily for different reasons). Despite notably more accurate results in the 2018 and 2022 midterms, pollsters this year are once again contending with a common denominator of the 2016 and 2020 cycles: now-former President Donald Trump, this year's presumptive GOP presidential nominee.

With the coming election serving as an unavoidable backdrop, the polling community met earlier this month at the annual conference of the American Association for Public Opinion Research to discuss and share best practices for gathering representative samples and measuring public opinion more effectively. Many presenters tackled the issue of response rates, discussing concrete ways to bring in a wider array of respondents, such as using mixed modes to contact people by phone, text message, postcard or even QR code. And in the face of a third Trump candidacy, another major topic of conversation was how pollsters have turned to weighting their samples at least in part by respondents' 2020 vote preferences, one way to try to correct for fewer responses from Republican-leaning voters.

Still, different sampling and weighting techniques feature trade-offs and offer no silver bullets. As we look ahead to a November rematch between Trump and President Joe Biden, pollsters are under pressure to address the problems they faced in 2020 as they aim to hit a moving target amid intense public scrutiny. "We're trying to measure very small shifts in people's attitudes and opinions about the candidates and the election and the state of the country," said Cameron McPhee, chief methodologist for SSRS, a nonpartisan polling firm that works with CNN, among others, on election polling. "And then people are latching on to what you find in a way that you don't see in any other public opinion space."

Sampling: Meet people where they are

One of the conference's consistent refrains was to "meet people where they are" when trying to reach potential respondents — that is, by using different modes of communication to contact different groups of people, based on their personal preferences. "There's no way anymore to get a representative sample of the U.S. population or the voters from a single mode," McPhee observed. "There are too many technologies out there. There are too many beliefs and preferences and attitudes and feelings about responding in general and responding by different modes." Ensuring no group is underrepresented because of low response rates is critical for pollsters to gather a representative sample of the population they're interested in — such as all voters in a given state — particularly if potentially underrepresented groups hold notably different views from those who are overrepresented.

McPhee told me the industry has moved toward agreement that multi-mode approaches are the best way to get a more representative sample. She stressed that it's not that any one mode is better, but rather that a combination is "better than the sum of its parts." For instance, one SSRS survey experiment found improved response rates for state-level surveys that recruited respondents by various means, including postcards and SMS text messages, and gave them six ways to respond: a URL, a QR code directing them to the survey, text, email, a phone number to call (inbound dialing) and a call from SSRS (outbound dialing).

Texting in particular has become a common means to engage respondents across all demographic and partisan groups, often as part of a mixed-mode approach. Kevin Collins is the co-founder of Survey 160, a firm that focuses on SMS text-based surveys in its work with Democratic pollsters and nonpartisan organizations, largely via text-to-web sampling (text messages that link to a survey on a web browser) or live interviews over text. "We do text message surveys, but really what we believe in is mixed-mode surveys," Collins told me. "Texting offers a very important additive benefit and cost savings over phones." The firm ran an experiment while polling Kentucky's 2023 gubernatorial election and found that its weighted sample's vote preferences were much closer to the election's final outcome when it mainly texted or used interactive voice response calls with younger respondents while prioritizing live phone interviews for older ones.

Though no demographic or political group is uniform in its preferred method, pollsters have identified patterns in which groups are more likely to answer polls via which modes. SSRS and other survey houses, such as the Pew Research Center, found that physical, paper-based approaches like postcards for recruitment and poll completion actually improved response rates among more Republican or conservative respondents, potentially because they found those modes more trustworthy. By comparison, those answering by web tended to be more Democratic-leaning. Younger respondents, including young conservatives, were more likely than other age brackets to use QR codes in SSRS's polling. Multiple firms presented findings that some voters of color, as well as respondents with lower educational attainment and income levels, were comparatively more likely to respond by phone.

Of course, multi-mode approaches present other challenges as well. For instance, Collins emphasized that establishing a polling firm's credibility in its text message outreach is a key consideration in text-to-web surveys. "People are reasonably cautious about clicking on links from phone numbers who have contacted them out of the blue," he said. (538 is familiar with this problem: We often receive emails from readers asking if this type of survey outreach comes from a legitimate source.)

Using multiple modes also creates certain logistical and statistical concerns. Pollsters have to make sure that respondents encounter their survey in a similar form regardless of what mode they use to take it, since question presentation can influence how respondents answer. Then, they have to decide how to weight respondents by mode, as there can be different errors for each method. All of this has had consequences for pre-survey and post-survey processes as firms try to balance art and science within polling.
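
The article doesn't detail how firms make these mode-weighting decisions, but a minimal sketch of one plausible approach (with invented respondents and targets) is to weight each mode's subsample to the same demographic benchmarks before pooling, so that no single mode's skew dominates the combined estimate:

```python
# Sketch: weight each survey mode's subsample to shared demographic
# targets before pooling. All respondents and targets are invented.

respondents = [
    # (mode, age_group, candidate_preference)
    ("web",   "18-44", "A"), ("web",   "18-44", "B"), ("web",   "45+", "A"),
    ("phone", "45+",   "B"), ("phone", "45+",   "B"), ("phone", "18-44", "A"),
    ("mail",  "45+",   "B"), ("mail",  "18-44", "B"),
]

# Assumed population age targets (e.g., from census or voter-file data).
targets = {"18-44": 0.45, "45+": 0.55}

# Within each mode, weight each age group up or down toward the targets.
weights = []
for mode, age, _pref in respondents:
    same_mode = [r for r in respondents if r[0] == mode]
    share_in_mode = sum(r[1] == age for r in same_mode) / len(same_mode)
    weights.append(targets[age] / share_in_mode)

# Weighted topline across the pooled, mode-balanced sample.
total = sum(weights)
for cand in ("A", "B"):
    share = sum(w for (_, _, p), w in zip(respondents, weights) if p == cand)
    print(f"Candidate {cand}: {share / total:.1%}")
```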

As all of these complicating factors suggest, there's no silver bullet for improving a sample. McPhee told me that using different modes seems to improve the representativeness of survey samples, but not consistently across different states and studies, making it hard to prescribe any particular method as a guaranteed fix for a sampling problem. For example, she noted it wasn't always clear whether texting was bringing more young people or more Democrats or Republicans into the sample, since age and partisanship are correlated.

In the same vein, one of the conference's more interesting panels discussed methods pollsters had tried to increase response rates among more Republican-leaning voters. Ipsos thought it might have found a way to recruit more GOP respondents by adding a sentence to its invitations emphasizing that the firm wanted views that aren't always represented; however, a replication experiment using this approach didn't pan out. The University of Pennsylvania tested whether a more "patriotic" invitation for an address-based sample, featuring images of the U.S. flag, would bring in more Republican respondents. At first blush, the experiment seemed to work because it produced a more representative sample in terms of partisanship. But it turned out that the patriotic mailers caused response rates to drop across all groups (the opposite of what a pollster wants); the sample was only more balanced because Republican response rates fell by a smaller amount than everyone else's.

Weighting: The known unknown of who will vote

Beyond gathering as representative a sample as possible, election polling presents another specific challenge: trying to gauge what the electorate will look like in a given election cycle. This raises questions about how to appropriately weight sample data — that is, how to adjust survey datasets to account for differences between the sample population and the overall population — to reflect who is most likely to actually vote in November.
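
To make the idea of weighting concrete, here is a minimal sketch of raking (iterative proportional fitting), a standard technique for adjusting respondent weights until a sample matches population targets on several dimensions at once. All respondents and target shares below are invented for illustration:

```python
# Sketch of raking (iterative proportional fitting): nudge respondent
# weights until the weighted sample matches population targets on
# multiple dimensions at once. Sample and targets are invented.

respondents = [
    {"age": "18-44", "party": "D"}, {"age": "18-44", "party": "D"},
    {"age": "18-44", "party": "R"}, {"age": "45+",   "party": "D"},
    {"age": "45+",   "party": "R"}, {"age": "45+",   "party": "R"},
    {"age": "45+",   "party": "R"},
]
targets = {
    "age":   {"18-44": 0.45, "45+": 0.55},
    "party": {"D": 0.48, "R": 0.52},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # iterate; the margins converge quickly in practice
    for dim, shares in targets.items():
        total = sum(weights)
        current = {c: sum(w for r, w in zip(respondents, weights)
                          if r[dim] == c) / total for c in shares}
        weights = [w * shares[r[dim]] / current[r[dim]]
                   for r, w in zip(respondents, weights)]

for r, w in zip(respondents, weights):
    print(r, round(w, 2))
```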

In its post-2020 report, AAPOR cited nonresponse bias as one of the potential reasons for the cycle's larger polling error, possibly because Democrats were systematically more likely to respond to pollsters than Republicans, and because the Republicans who did answer may have differed in important ways from the Republicans who didn't. The 2020 election's record-setting turnout amid the COVID-19 pandemic also complicated pollsters' efforts to get a read on just who was most likely to vote, as states expanded mail-in voting and Trump's presidency likely motivated higher turnout among both supporters and opponents.

In light of all this, some pollsters like SSRS are weighting their sample data this cycle by accounting for respondents' past votes (or at least who they recall voting for) to better ensure that the political makeup of survey samples more closely matches reality. "If there is a nonresponse bias that is specifically tied to your vote choice, it's like the worst kind of nonresponse bias for any survey, right?" McPhee said. "This recalled vote thing is our best way of fixing that. The people that voted for Trump in 2020 and are responding to our surveys at a lower rate than the people that voted for Biden need to be weighted up."
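
Here's a stripped-down illustration of the recalled-vote adjustment McPhee describes. The respondent counts are invented; the 2020 national popular vote shares are real (roughly 51.3 percent Biden, 46.9 percent Trump):

```python
# Sketch: weight respondents so recalled 2020 vote matches the actual
# result. Respondent counts are invented; the 2020 national popular
# vote shares (Biden 51.3%, Trump 46.9%, other 1.8%) are real.

recalled = {"Biden": 560, "Trump": 400, "Other": 40}  # invented sample
actual   = {"Biden": 0.513, "Trump": 0.469, "Other": 0.018}

n = sum(recalled.values())
weight = {cand: actual[cand] / (count / n) for cand, count in recalled.items()}

for cand, w in weight.items():
    print(f"{cand} recallers get weight {w:.2f}")
# Trump recallers (40% of this sample vs. 46.9% of actual 2020 voters)
# get weighted up; overrepresented Biden recallers get weighted down.
```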

But that's sometimes easier said than done; while pollsters can use voter files to see if someone voted in a previous election, they can't see who that person voted for and have to rely on self-reporting. "I think weighting on past vote has a lot of intuitive appeal and a lot of practical difficulty," Collins observed. "It's been known for a long time that not everyone recalls this perfectly. Some people recall voting, some don't recall at all. And some people who recall voting are more likely to recall voting for the winner."

With such concerns in mind, McPhee's SSRS colleagues have been tracking vote recall among a panel of survey respondents since early 2022. They've found that Biden and Trump voters' recall is pretty stable over time (90 percent or more consistently give their initial choice). However, those who recall voting for another option (say, a third-party candidate) have been much less consistent, such that their recalled vote has served as something akin to nonresponse — that is, they may have voted for Biden or Trump but are saying otherwise. SSRS then conducted an experiment that found that listing the specific names of the 2020 Libertarian and Green Party nominees (Jo Jorgensen and Howie Hawkins, respectively) helped reduce that discrepancy — this approach may have jarred some respondents into realizing they hadn't actually voted for those candidates — and yielded a sample that more closely aligned with actual 2020 election results.

But even the most accurate weighting by past votes can't answer the ever-present question of which voters will turn out this year. How to handle the many voters who cycle in and out of the electorate is no minor consideration: As Nate Cohn of the New York Times recently noted, roughly one-quarter of presidential election voters historically didn't vote in the previous presidential contest. That churn results in part from younger voters entering the electorate and older ones leaving it, but also from less engaged voters who bounce in and out from election to election. "Good pollsters have to be thoughtful about how they account for those people," McPhee said, as weighting by recalled vote isn't an option with new voters.

"Asking who is a likely voter is the wrong question," Collins said. "It's easy to know who the likely voters are — they're the people who regularly vote. But the challenge is identifying which unlikely voters will end up voting." And he stressed that this question becomes much more difficult if likely and unlikely voters have divergent vote preferences — as some national surveys have suggested is the case this year, finding that registered voters who didn't vote in 2020 are slightly more Trump-leaning in 2024 surveys than those who did vote in 2020. That makes a pollster's estimates of just who will show up on Election Day even more important in determining the outcomes of their polls.

McPhee told me pollsters use a combination of model- and question-based approaches to identify likely voters. For example, SSRS uses state voter files to build a model for turnout based on each respondent's past vote history. The more traditional question-based approach is to directly ask respondents how likely they are to vote, but pollsters can improve upon that by also asking how they intend to vote. For instance, she told me, someone saying they plan to vote by mail two days before the election may not actually be that likely to vote. On the other hand, someone's responses to these questions might indicate that they're a very eager voter despite the lack of a past voting history.
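
The article doesn't spell out SSRS's actual model, but a hedged sketch of the hybrid idea, blending voter-file turnout history with stated intention, might look like this (every coefficient and cutoff below is invented for illustration):

```python
# Sketch of a hybrid likely-voter score: blend voter-file turnout
# history with self-reported intention. All coefficients are invented;
# real models are fit against past election data.

def likely_voter_score(past_votes: int, elections_eligible: int,
                       stated_likelihood: float,
                       plan_is_plausible: bool) -> float:
    """Return a rough 0-1 turnout probability estimate.

    past_votes / elections_eligible: voter-file turnout history.
    stated_likelihood: self-report on a 0-1 scale.
    plan_is_plausible: e.g., False if someone says they'll vote by
    mail two days before the election.
    """
    if elections_eligible == 0:       # brand-new voter with no history:
        history = stated_likelihood   # lean entirely on the self-report
    else:
        history = past_votes / elections_eligible
    score = 0.6 * history + 0.4 * stated_likelihood
    if not plan_is_plausible:
        score *= 0.5                  # discount implausible voting plans
    return min(score, 1.0)

# An eager new voter vs. a habitual voter with an implausible plan:
print(likely_voter_score(0, 0, 0.9, True))   # ~0.9
print(likely_voter_score(4, 4, 0.9, False))  # ~0.48
```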

Critically, turnout is not just a question of overall participation, but also which groups turn out at higher or lower levels. As 538's Swing-O-Matic interactive shows, lower Black turnout could be costly for Biden, while lower turnout among white voters without a four-year college degree could hurt Trump's chances. For that reason, McPhee said, the most important thing for pollsters to be transparent about in this election cycle is how sensitive their estimates are to who they expect to turn out — whether a pollster's different turnout scenarios consistently show one candidate leading, or yield a more mixed result. Collins agreed: "The aggregate level of turnout matters a lot less for polling accuracy than who's turning out to vote."
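
One way to make that sensitivity concrete (a sketch with invented subgroup sizes, preferences and turnout rates, not any firm's actual method) is to recompute the topline under several turnout scenarios and report the spread:

```python
# Sketch: test how sensitive a poll's topline is to turnout assumptions
# by re-running the estimate under different subgroup turnout rates.
# All group sizes, vote preferences and turnout rates are invented.

groups = {
    # group: (share of registered voters, Biden share among its voters)
    "Black":             (0.12, 0.85),
    "White non-college": (0.40, 0.35),
    "All others":        (0.48, 0.55),
}

scenarios = {
    "baseline":                {"Black": 0.60, "White non-college": 0.60, "All others": 0.60},
    "lower Black turnout":     {"Black": 0.50, "White non-college": 0.60, "All others": 0.60},
    "lower non-college white": {"Black": 0.60, "White non-college": 0.50, "All others": 0.60},
}

for name, rates in scenarios.items():
    votes = sum(size * rates[g] for g, (size, _) in groups.items())
    biden = sum(size * rates[g] * pref for g, (size, pref) in groups.items())
    print(f"{name}: Biden {biden / votes:.1%}")
```

Under these made-up numbers, the topline swings by nearly two points across scenarios, which is exactly the kind of sensitivity McPhee suggests pollsters should disclose.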

***

If there's one constant in polling, it's that change is necessary. Tellingly, the theme of one conference roundtable was "adapt or die," something pollsters have been doing to keep up with changes in technology and society since the dawn of public opinion surveys in the 1930s. Pollsters have identified the challenges that have recurred in recent years and adjusted their methods to overcome them, but how those changes affect their estimates relative to actual vote tallies won't become apparent until after the election. After all, it's worth remembering that polling error has historically varied from year to year in both scope and direction (that is, whether the polls leaned more Democratic or Republican than the election result). This time around, hindsight may be 2020, but 2024 won't necessarily fit the bill.