What are the best pollsters in America?

538's new pollster ratings quantify each firm's error, bias and transparency.

With former President Donald Trump’s recent Republican presidential primary victories in Iowa and New Hampshire, we can now say with near certainty that the 2024 general election will feature a rematch between President Joe Biden and his predecessor. Both the stakes of the election and the uncertainty of its outcome are high, and people will be paying it a lot of attention over the next 10 months. And if social media conversations and news coverage about the primary are any indication, public opinion polls will feature heavily in the discourse about the general election.

In fact, we are due, by my estimation, to be inundated with around 1,500 polls of elections for president, senator, governor and the U.S. House by November. For poll-readers trying to analyze each one, it will feel like drinking from a firehose. Each poll brings with it an array of questions about trust and reliability. For instance, when two polls disagree, which do we trust more? And when we’re averaging polls together (538’s preferred solution to the firehose problem), how can we quantify our preference in a way that is statistically valid and leads to the most accurate models?

Enter 538's pollster ratings, which grade each polling organization based on its historical accuracy and methodological transparency. These ratings have long been an ingredient in 538's polling averages and election models,* but we've rebuilt them from the ground up to account for a changing landscape of polling bias, uncertainty and difficulty.

How we grade pollsters

If you're interested in all the gory details of how we calculate pollster ratings, please peruse our detailed methodological write-up at your leisure. But if all you need is a top-level overview, just know that our ratings reflect firms' scores on two dimensions of pollster quality.

The first is empirical accuracy, as measured by the average error and average bias of a pollster's polls. We quantify error by calculating how close a pollster's surveys land to actual election results, adjusting for how difficult each contest is to poll. Bias is error with a direction: it captures whether a pollster's misses systematically favor Republicans or Democrats. We average our final error and bias values together into one measure of overall accuracy called POLLSCORE, a silly backronym for "Predictive Optimization of Latent skill Level in Surveys, Considering Overall Record, Empirically." POLLSCORE tells us whether a pollster is more accurate than a theoretical replacement-level pollster that polled all the same contests. Negative POLLSCOREs are better and mean that a pollster has less error and bias than this theoretical alternative.
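To make the error-versus-bias distinction concrete, here is a minimal Python sketch using a simplified, hypothetical definition of both quantities. Our actual POLLSCORE also adjusts for how hard each contest is to poll and benchmarks against that replacement-level pollster; none of that machinery is modeled here.

```python
# A simplified, hypothetical error/bias calculation. Margins are
# Democratic-candidate leads in percentage points; a positive miss
# means the poll overestimated the Democrat.

def error_and_bias(polls: list[tuple[float, float]]) -> tuple[float, float]:
    """polls: (poll_dem_margin, actual_dem_margin) pairs."""
    misses = [poll - actual for poll, actual in polls]
    error = sum(abs(m) for m in misses) / len(misses)  # average absolute miss
    bias = sum(misses) / len(misses)                   # average signed miss
    return error, bias

# Two polls of a race the Democrat actually won by 4 points:
# misses of +1 and -3 give an error of 2.0 but a bias of only -1.0.
print(error_and_bias([(5, 4), (1, 4)]))  # (2.0, -1.0)
```

Note that the two misses partly cancel in the bias calculation but not in the error calculation, which is exactly why the two metrics tell us different things about a pollster.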

But empirical accuracy only gets us so far. Some pollsters are accurate, but they don't reveal much about how they actually do their work. This can range from small things, like not releasing sample sizes for key subgroups, to big problems, such as not disclosing the partisan sponsors of their research. We have found that pollsters that hide such information tend to be less accurate in future elections, even if they have good historical empirical records.

So we now also score firms based on their methodological transparency. To do this, we have quantified how much information each pollster released about every poll in our archive since 2016. (Shoutout to our fantastic research team, Mary Radcliffe and Cooper Burton, for undertaking this heroic task.) Each poll gets 1 point for each of 10 criteria it meets, ranging from whether it published the actual question wording of its poll to whether it listed sample sizes for key subgroups. We give each pollster a Transparency Score based on the weighted average of the scores of its individual polls and whether it shares data with the Roper Center for Public Opinion Research at Cornell University or is a member of the American Association for Public Opinion Research's Transparency Initiative.
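As a rough illustration, a score like that might be assembled in code as follows. The example criteria, the per-poll weighting and the size of the Roper/AAPOR adjustment are all hypothetical stand-ins, not our exact formula; see the methodological write-up for the real thing.

```python
# A hypothetical sketch of a pollster-level Transparency Score. The
# bonus terms and the cap below are illustrative guesses.

def poll_transparency(criteria_met: list[bool]) -> int:
    """One point for each of 10 criteria a poll meets (e.g., published
    question wording, listed subgroup sample sizes, named its sponsor)."""
    assert len(criteria_met) == 10
    return sum(criteria_met)

def pollster_transparency(poll_scores: list[int], weights: list[float],
                          shares_with_roper: bool, in_aapor_ti: bool) -> float:
    # Weighted average of the pollster's individual poll scores...
    base = sum(s * w for s, w in zip(poll_scores, weights)) / sum(weights)
    # ...plus a hypothetical bump for institutional transparency commitments.
    bonus = 0.5 * shares_with_roper + 0.5 * in_aapor_ti
    return min(10.0, base + bonus)

print(pollster_transparency([8, 9, 7], [1.0, 1.0, 2.0],
                            shares_with_roper=True, in_aapor_ti=False))  # 8.25
```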

Finally, we combine each pollster's POLLSCORE and Transparency Score into a star rating between 0.5 and 3. Only the best of the best will get 3.0 stars; these are pollsters that score in the 99th percentile or better for both accuracy and transparency. Pollsters scoring between 2.8 and 3.0 are still very good — just not the best of the best. Most pollsters score between 1.9 and 2.8, representing what we see as America's core bloc of good pollsters. Pollsters between 1.5 and 1.9 stars are decent, but they typically score poorly on either accuracy or transparency. Generally, we are very skeptical of pollsters that get less than 1 star, as they both have poor empirical records and share comparatively little about their methodology. A 0.5-star rating — the lowest possible — is reserved for pollsters that have records of severe error or bias or that disclose only the bare minimum about their polls.
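For the sake of illustration, a toy version of that final mapping might look like this. The 99th-percentile rule for 3.0 stars comes straight from the paragraph above, but the way the two percentiles are blended below is an invented stand-in, not our actual formula.

```python
# A toy star-rating function. Inputs are a pollster's percentile ranks
# (0-100) on accuracy (POLLSCORE) and on Transparency Score.

def star_rating(accuracy_pctile: float, transparency_pctile: float) -> float:
    # Per the article, 3.0 stars require the 99th percentile on both.
    if accuracy_pctile >= 99 and transparency_pctile >= 99:
        return 3.0
    combined = (accuracy_pctile + transparency_pctile) / 2
    # Hypothetical linear stretch of the combined percentile onto the
    # rest of the 0.5-2.9 range, rounded to one decimal place.
    return round(0.5 + 2.4 * combined / 100, 1)

print(star_rating(99.5, 99.2))  # 3.0
print(star_rating(70, 60))      # 2.1
```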

Why bias, instead of error alone?

Eagle-eyed readers (and pollster-rating superfans) may have noticed two key differences from 538's past pollster ratings. The first is that the ratings incorporate not just polling error, but polling bias. We think both metrics are important, as demonstrated by this simple illustration.

Imagine two polling firms: Pollster A and Pollster B. Pollster A released three surveys of the presidential election in 2020. They showed now-President Joe Biden beating then-President Donald Trump in the national popular vote by 6, 7 and 8 percentage points. Given that Biden actually won the popular vote by 4 points, these polls were off by 2, 3 and 4 points, respectively — all in Biden's favor. On average, that means Pollster A's polls had an error of 3 points and an identical bias of 3 points toward Democrats.

Pollster B, on the other hand, released two surveys showing Biden up by 8 and 12 percentage points and one survey showing Trump up by 2. These polls were off by 4, 8 and 6 points, respectively, for an average error of 6 points — higher than Pollster A's. However, Pollster B's polls were less biased: Biden's average lead in its polls was 6 points, meaning Pollster B's polls were biased toward Democrats by only 2 points.
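Here's a quick, self-contained check of that arithmetic (margins are Biden leads in points, with Trump +2 entered as -2):

```python
ACTUAL = 4  # Biden's actual national popular-vote margin, in points

def error_and_bias(poll_margins):
    misses = [m - ACTUAL for m in poll_margins]
    return (sum(abs(m) for m in misses) / len(misses),  # error
            sum(misses) / len(misses))                  # bias

print(error_and_bias([6, 7, 8]))    # Pollster A: (3.0, 3.0)
print(error_and_bias([8, 12, -2]))  # Pollster B: (6.0, 2.0)
```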

When we create polling averages, we want them to be not only accurate, but also unbiased. And to produce unbiased predictions, we need a lot of unbiased polls. (Nate Cohn at The New York Times discovered something similar when developing the Times's own polling average methodology in 2020.) An accurate, but biased, set of pollsters will still yield a biased aggregate on average.

Think about this another way. If most polls in a race overestimate the Democratic candidate by 10 points in a given election, but Pollster C's surveys overestimate Republicans by 5, there may be something off about the way Pollster C does its polls even if its accuracy is higher. We wouldn't necessarily expect it to keep outperforming other pollsters in subsequent elections since the direction of polling bias bounces around unpredictably from election to election.
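A tiny simulation makes the point: averaging many accurate but same-direction-biased polls leaves the bias fully intact, while mixing in an opposite-leaning pollster pulls the aggregate back toward the truth. All the numbers here are invented for illustration.

```python
import random

random.seed(538)
TRUE_MARGIN = 4.0  # the real Democratic margin, in points

def poll(bias: float, noise: float = 1.5) -> float:
    """One simulated poll: truth, plus a systematic lean, plus noise."""
    return TRUE_MARGIN + bias + random.gauss(0, noise)

dem_leaning = [poll(bias=3.0) for _ in range(100)]          # accurate but biased
mixed = dem_leaning + [poll(bias=-5.0) for _ in range(60)]  # add GOP-leaning polls

def avg(xs):
    return sum(xs) / len(xs)

print(avg(dem_leaning) - TRUE_MARGIN)  # ~ +3.0: averaging doesn't cancel shared bias
print(avg(mixed) - TRUE_MARGIN)        # ~ 0.0: offsetting biases can
```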

Transparency matters

The second novelty in 538's new pollster ratings is the Transparency Score.

There are several ways to assess how much we should trust a pollster (and, therefore, how much weight we should put on its data in our models). The most direct measurement is the aforementioned quantitative evaluation of pollsters' track records; the idea here is that the most trustworthy pollster is the one that has been most accurate historically. That works well for pollsters that release a lot of polls across different types of races and years. But we don't have that information for all pollsters, and many firms change their methods over time, making their past performance less predictive of future results.

For these pollsters, it turns out that transparency can serve as a proxy for future performance. For example, 538's research has found that pollsters that share their data with the Roper Center for Public Opinion Research or participate in the AAPOR Transparency Initiative tend to be more accurate than pollsters that don't. The chart below shows our weighted average POLLSCORE for these two groups of pollsters:

In 2000, pollsters that were members of the AAPOR Transparency Initiative or shared their data with the Roper Center as of December 2023 were about 2.3 points more accurate than an average pollster and over 3.5 points more accurate than pollsters that didn't participate in the AAPOR Transparency Initiative or share data with Roper. That difference shrank to about 1.8 points in 2022, but it remains statistically significant. All else being equal, you should almost always prefer a pollster that participates in one of these organizations over a pollster that does not meet either of those criteria. For this reason, 538's old pollster ratings took AAPOR Transparency Initiative and Roper participation into account.

Our new ratings go a step further by incorporating a direct measurement of pollster transparency: Transparency Score. We developed this metric in collaboration with Mark Blumenthal, a pollster, past 538 contributor and co-founder of the (now sadly defunct) poll-aggregation website Pollster.com. Blumenthal has found that pollsters that released more information about their work tended to be more accurate during the 2022 election cycle. Therefore, having a specific score for each pollster's transparency should give us even more information with which to predict how well it will perform in the future. It also brings our definition of "trust" in a pollster closer to how scientists peer-review each other or how journalists vet other types of sources (data-driven journalism is, after all, journalism).

America's best pollsters

Now for the moment you've all been waiting for — which pollsters actually score the best by our new metric?

There are some familiar faces here. The New York Times/Siena College, for example, is the most accurate pollster in America. Due to its accuracy and transparency, it and ABC News/Washington Post are also the only two pollsters with a three-star rating (although 538 is part of ABC News, we developed this methodology without input from ABC News’s polling team and did not know how it would affect their rating). The Marquette University Law School poll, meanwhile, is America’s most transparent, owing to the abundance of information it shares about how it conducts its polls. Other pollsters near the top of the list benefit from a mix of accuracy and transparency.

However, a word of caution here. The precise values for each score — and, therefore, each pollster's rank — are subject to a good amount of measurement error. Although we have a lot of quantitative tools to account for a poll's sampling error and for how difficult certain races are to poll, there are some factors we simply cannot adjust for. To illustrate this point, I re-ran our pollster-rating computer program 1,000 times, each time grading pollsters based on a random subset of their polls in our database (a technique academics call "bootstrapping"). This yielded 1,000 different plausible pollster scores for each organization. In the table below, I show the median, 5th percentile and 95th percentile of a few key firms' overall ranks across the simulations.
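For readers who want to see the shape of that exercise, here is a minimal sketch of a bootstrap over pollster ranks. The scoring step is deliberately simplified: each firm is reduced to the mean of some per-poll score, standing in for our full rating program, and lower scores are treated as better.

```python
import random
import statistics

def bootstrap_ranks(polls_by_firm: dict[str, list[float]], n_sims: int = 1000):
    """Returns {firm: (5th percentile, median, 95th percentile)} of its rank."""
    ranks: dict[str, list[int]] = {firm: [] for firm in polls_by_firm}
    for _ in range(n_sims):
        # Re-grade each firm on a random resample (with replacement) of its polls.
        scores = {firm: statistics.fmean(random.choices(polls, k=len(polls)))
                  for firm, polls in polls_by_firm.items()}
        # Rank 1 is the best (lowest) score in this simulation.
        for rank, firm in enumerate(sorted(scores, key=scores.get), start=1):
            ranks[firm].append(rank)
    summary = {}
    for firm, rs in ranks.items():
        cuts = statistics.quantiles(rs, n=20)  # 19 cut points: 5%, 10%, ..., 95%
        summary[firm] = (cuts[0], statistics.median(rs), cuts[18])
    return summary

# Toy usage with made-up per-poll scores for three hypothetical firms:
random.seed(0)
firms = {"A": [1.0, 2.0, 1.5, 2.5], "B": [2.0, 3.0, 2.5, 1.0], "C": [4.0, 3.5]}
print(bootstrap_ranks(firms))
```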

As you can see, there is a fairly wide range for many pollsters' potential ranks. The width of that range depends on both how many polls the pollster has published and how often it does well or poorly compared with the competition — in essence, whether it's getting "luckier" than others.

All this is to say you should not sweat small differences in the pollster ratings. A pollster's exact rank is less important than its general position: Pollsters near the top of the list are very trustworthy; those near the middle are good but have minor issues; and you should be wary of data from pollsters near the bottom.

Conclusion

That's it for now! Our new methodology for rating pollsters is designed to provide readers with a quick, accessible way to identify the most accurate, unbiased and transparent polls in America. As the polling landscape evolves, our strong belief is that the best pollsters are those that both performed well in the past and show their work today. Check out the full rankings on our interactive dashboard.

Footnote

*While we haven't yet updated our polling averages to account for the new pollster ratings, we will do so in the near future.

CORRECTION (Jan. 25, 2024, 11:25 a.m.): A previous version of this article incorrectly stated that Mark Blumenthal has found that pollsters that release more information about their work tend to be more accurate in the long run. Blumenthal’s finding was that pollsters that released more information about their work tended to be more accurate during the 2022 election cycle.