The New Hampshire Polls: What We Know

Jan 11, 2008 11:24am

(Note: This post was updated at 2 p.m. with a data point that came late to our attention.)

Efforts so far to explain the New Hampshire poll meltdown amount to theories in search of data; we don’t yet have the hard evidence and full, thoughtful evaluation we need. But two of the most popular explanations are to my mind the weakest: that the polls were right when taken but missed a late Clinton surge; or that respondents lied.

There are good empirical data to rebut the first argument, a late Clinton surge. And the racially based theory of dissembling respondents needs more data to be persuasive.

But again – not to prejudge. The short answer is that we don’t know yet where these polls went wrong. This post will try to advance our knowledge in three ways: by examining the facts of the failure, summarizing the theories and exploring avenues for inquiry.

One aside: I’ve been joking that what I like best about the final New Hampshire pre-election polls is that I didn’t do any of them. That may seem to put me in the classic definition of a critic: The person who, after watching the battle from the hilltop, rides down and shoots the wounded.

The reality is that several of these polls were produced by experienced, consummate professionals; what I really think when I look at their New Hampshire data is that there, but for the grace of God, go I. For all our sakes, we simply need to know what happened.

Here’s the state of play:

The Failure
Nine final pre-election polls were released Monday or Tuesday morning in the New Hampshire Democratic primary; each had Obama numerically ahead, by margins from 3 to 13 points, averaging 8 points. The Clinton and Obama campaigns’ internal polls are reported to have shown him ahead as well. Clinton won by 2 points, 39-37 percent.

Of the nine public polls, five have been rated as airworthy by ABC News’ methodological standards. They did no better: A range of +4 to +13 for Obama.

Three of the nine polls were conducted Friday-Sunday, three Saturday-Sunday, one Saturday-Monday and two Sunday-Monday. All were done by telephone, with sample sizes from 323 to 862. Gallup’s (Obama +13) included interviews with cell-phone-only respondents. A Gallup national poll, Friday-Sunday, also showed movement to Obama; he tied Clinton 33-33, their closest in any national survey.

All the polls understated Clinton’s support (by 5 to 11 points, average 9) rather than overstating Obama’s (errors on his share ranged from 5 points too high to 5 points too low, averaging 1 point too high). They reported “undecided” voters (which we regard as a function of polling technique rather than a measure of true indecision) in a range from 2 percent to 12 percent.
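
For readers who want the arithmetic spelled out, here’s a minimal sketch, in Python, of the per-candidate error computation used above. The poll figures in it are hypothetical, for illustration only; the actual result (Clinton 39 percent, Obama 37 percent) is from above.

    # Per-candidate poll error: poll estimate minus actual result.
    # Negative values mean the poll understated the candidate.
    ACTUAL = {"Clinton": 39, "Obama": 37}  # actual NH result, cited above

    def candidate_errors(poll):
        return {name: poll[name] - ACTUAL[name] for name in ACTUAL}

    hypothetical_poll = {"Clinton": 30, "Obama": 39}  # made-up figures
    print(candidate_errors(hypothetical_poll))
    # {'Clinton': -9, 'Obama': 2}: Clinton understated by 9 points,
    # Obama overstated by 2, the pattern described above.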

Final polling in the New Hampshire Republican race came closer to the election outcome. Seven of eight public polls had the correct order of finish of the top two candidates. The average McCain lead was 5 points, matching his winning margin.

The Background
Pre-election polls have a remarkable history of accurately predicting election winners. A review of 2004 general election polls by the National Council of Public Polls found an average error on each candidate of 2 points in state polls and 1 point in national polls. The en masse failure of the New Hampshire Democratic primary polls is unprecedented.

Pre-election polling is more complex than other survey research because it requires examining the attitudes of a population that does not yet exist – voters. These polls are required to estimate a “likely voter” population based on propensity to participate. Models used by each organization are idiosyncratic and generally treated as proprietary.
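
Because each organization’s model is proprietary, what follows is only a generic sketch of one widely described approach, the “cutoff” method: score each respondent on screening questions, then keep the top scorers up to an expected turnout rate. The questions, point values and turnout figure here are illustrative assumptions, not any pollster’s actual model.

    # A generic "cutoff" likely-voter sketch. The screen questions, point
    # values and turnout rate are illustrative assumptions only.
    def propensity_score(r):
        score = 0
        if r["intends_to_vote"]:
            score += 2
        if r["voted_in_last_primary"]:
            score += 1
        if r["knows_polling_place"]:
            score += 1
        score += r["interest_0_to_3"]  # self-reported interest, 0 to 3
        return score

    def likely_voters(respondents, expected_turnout=0.5):
        # Keep the most vote-prone respondents, sized to expected turnout.
        ranked = sorted(respondents, key=propensity_score, reverse=True)
        return ranked[:int(len(ranked) * expected_turnout)]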

A further complication is that while underlying attitudes on issues are likely to be stable, vote-preference choices may evolve over time, not crystallizing for some voters until Election Day approaches or arrives; this can be so particularly in primaries, in which preferences are not stabilized by political party identification. A final complication is turnout, which can be affected by unanticipated variables such as weather or get-out-the-vote drives.

While we can’t yet conclude what occurred in New Hampshire, as a rule, problems in likely voter modeling are the chief suspect in bad final estimates of vote preference.

The Theories
Insufficient data are available to fully analyze the New Hampshire Democratic polls. What’s needed is a review of overall turnout estimates, the size of population groups within the “likely voter” population, and the vote preference in each of these groups. We’ve requested these data, and we expect an analysis from a competent independent arbiter such as the American Association for Public Opinion Research.

As noted, some pollsters tend to blame poor final pre-election estimates on one of two factors: Late deciders or misrepresentation by respondents. These sound somewhat self-serving, and my first instinct is always to look inside – at sampling or modeling – rather than outside.

In any case a variety of theories have been proposed. As yet they are theories only. A summary follows:

1) Late deciders
It’s postulated that there was a sharp swing to Clinton among voters who made their decisions in the last day or two of the campaign; in some arguments this is described as a shift of women to Clinton after she became emotional in a campaign appearance Monday.

No data have been presented to support this late-swing theory, and several data sources rebut it. First: Three of the nine polls were conducted through Monday night; had there been a shift toward Clinton that night, presumably they would have caught it. They did not; Obama was +13, +9 and +5 in these polls – all in fact slightly better for Obama than their results in polling completed the previous night.

Second, the exit poll asked voters the time of their decision. Seventeen percent said they decided on Election Day; they voted for Clinton over Obama by a 3-point margin, 39 to 36 percent – hardly a significant swing from the overall result (Clinton +2). Those who said they decided in the previous three days, 21 percent, favored Obama over Clinton by 3 points, 37-34 percent – further deflating the late-decider argument. Those who decided previously, 61 percent of voters, favored Clinton over Obama by 41-37 percent.
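
As a consistency check, the three subgroup margins just cited, weighted by each group’s share of the electorate, should roughly reproduce the overall Clinton +2 result. This short calculation, using only the exit poll figures above, shows they do (the shares sum to 99 percent because of rounding):

    # Weight each subgroup's Clinton margin by its share of voters.
    groups = [
        (0.17,  3),  # decided on Election Day: Clinton +3
        (0.21, -3),  # decided in the prior three days: Obama +3
        (0.61,  4),  # decided earlier: Clinton +4 (41-37)
    ]
    overall = sum(share * margin for share, margin in groups)
    print(f"Implied overall Clinton margin: {overall:+.1f} points")  # about +2.3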

Some of the New Hampshire pollsters have said they saw higher-than-usual changeability in the electorate – a quarter or more of respondents saying they might change their minds. I wish that, seeing this, more of them had stayed in the field Monday night. None of the three that did, nor the exit poll, indicates a late Clinton surge.

Note further that the nine pre-election polls showed Obama leads regardless of their level of “undecideds” – Obama was +4 in one poll, with 12 percent undecided, but +13 in another, with 2 percent undecided. That would not argue in favor of arbitrarily moving all undecideds in these polls to Clinton’s favor.
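
The arithmetic behind that point, for the two polls cited: handing every undecided voter to Clinton overshoots the actual result in one poll and leaves the other far off.

    # Give ALL undecideds to Clinton and recompute each margin.
    # Margins are Obama minus Clinton, in points.
    polls = [(4, 12), (13, 2)]  # (Obama margin, percent undecided)
    for obama_margin, undecided in polls:
        adjusted = obama_margin - undecided
        leader = "Obama" if adjusted > 0 else "Clinton"
        print(f"Obama +{obama_margin} becomes {leader} +{abs(adjusted)}")
    # Obama +4 becomes Clinton +8 (overshoots the actual Clinton +2)
    # Obama +13 becomes Obama +11 (still far off)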

2) Turnout surge
In a corollary to the late-decider theory, it’s been suggested that an unexpected surge of sympathetic older women delivered the contest to Clinton after her show of emotion on Monday. Our election night analysis noted overwhelming support for Clinton among women aged 65 and up, 57-27 percent. But their turnout was not up disproportionately; they accounted for 7 percent of all voters, vs. 6 percent in 2004 and 8 percent in 2000.

While we anticipate data on 65-and-up women from the pre-election pollsters, the sample sizes for this subgroup may be inadequate for firm conclusions.

Turnout among women overall likewise was not up, but about average; they accounted for 57 percent of Democratic voters, compared with 54 percent in 2004, 62 percent in 2000, 57 percent in 1996 and 54 percent in 1992.

3) Ballot Order
As published on our site, Prof. Jon Krosnick of Stanford University argues that a ballot-order effect was at work. Polls generally randomize the order in which candidate names are offered. The New Hampshire ballot listed them without randomization. Clinton’s name was near the top, Obama’s near the bottom. Krosnick’s research suggests this would have added an estimated 3 points to Clinton’s vote total.

This is a seemingly plausible argument that could explain some (but not all) of the discrepancy with pre-election polls. It suggests, in effect, that the polls were less wrong than they look (by varying degrees), and the election itself was distorted, to some extent, by ballot order.
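
A back-of-the-envelope version of that claim, assuming (simplistically) that removing Krosnick’s estimated 3 points from Clinton’s total leaves Obama’s share unchanged:

    # Remove the estimated ballot-order bonus from Clinton's total
    # and compare the adjusted margin with the polls' average.
    clinton_actual, obama_actual = 39, 37
    ballot_order_bonus = 3   # Krosnick's estimate, cited above
    avg_poll_margin = 8      # average Obama lead in the nine final polls

    adjusted = obama_actual - (clinton_actual - ballot_order_bonus)
    print(f"Adjusted result: Obama +{adjusted}")                      # Obama +1
    print(f"Still unexplained: {avg_poll_margin - adjusted} points")  # 7 points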

We’ve found one possibly supporting fact: As far as we’ve ascertained, the pre-election poll that came closest to the correct outcome (Obama +1, Suffolk University, Saturday-Sunday) was the only one that did not randomize candidate names – it read them alphabetically, as they appeared on the ballot. (However, Suffolk moved to Obama +5 in its final Sunday-Monday estimate, and it was the only polling outfit to muff the order of finish in the GOP race.)

4) “Bradley Effect”
It’s been suggested that in some past elections involving a white and a black candidate, all of them many years ago, pre-election polls understated the white candidate’s support. (The Bradley effect is named for Tom Bradley’s 1982 race for California governor.) This has led to suggestions that some whites are reluctant to express support for a white candidate in a biracial race for fear of being perceived as racist.

This postulated effect ties into studies finding that some respondents give different answers to polling questions based on their perception of the interviewer’s race. The only published work we’ve seen to put this in an election context (holler with more) is a 1991 study finding that whites were 8 to 11 points more apt to support the white candidate in the 1989 Virginia governor’s race when speaking with a white interviewer.

There are some problems, though. I’d like to see more than one study; this one covered just 172 white respondents and was carried out by nonprofessional student interviewers. (And the report footnotes a contextual effect in this particular study “that may heighten the race-of-interviewer effect.”) The Bradley effect has been raised in six elections, all 15 to 25 years ago. We’re aware of many other, more recent biracial races (five Senate or governor races in 2006 alone) in which the pre-election polling was quite accurate. It didn’t happen in Iowa. And this contest can hardly be described as racially charged. The effect, if it exists at all, is at best inconsistent.

Pew pollster Andy Kohut, in yesterday’s New York Times, added the argument that poorer and less-educated whites who do not respond to surveys have more racist views than those who do respond. Knowing more about that would be valuable.

Clinton did do best with voters lower on the socioeconomic scale, and low-SES individuals are harder to reach in polls. Sample weighting should adjust for this, but it’s another area worth further examination.
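
“Sample weighting” here typically means post-stratification: adjusting respondents so the sample’s demographic mix matches known population targets. A minimal sketch on a single dimension, education, with made-up shares:

    # Minimal post-stratification: weight each cell so the sample's
    # education mix matches population targets. All shares here are
    # made up, for illustration only.
    from collections import Counter

    targets = {"no_college": 0.45, "college": 0.55}  # assumed population shares

    def cell_weights(sample):
        counts = Counter(sample)
        n = len(sample)
        return {cell: targets[cell] / (counts[cell] / n) for cell in targets}

    sample = ["college"] * 70 + ["no_college"] * 30  # lower-SES group underrepresented
    print(cell_weights(sample))
    # {'no_college': 1.5, 'college': 0.785...}: each no-college respondent
    # counts 1.5 times, correcting the shortfall described above.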

5) Iowa-related Motivation/Demotivation
I’d add another theory – admittedly again lacking supporting data. The Iowa and New Hampshire contests were compressed as never before. Obama rode a wave of enthusiasm out of Iowa; Clinton was deflated. Obama supporters in New Hampshire may have been more inclined, and Clinton supporters less so, to express support for their candidates. And Clinton voters may have been less apt to pass likely-voter screens based on expressions of intention to vote, enthusiasm, strength of support or attention to the contest.

6) Modeling and Sampling
I’ve already discussed likely voter modeling. I’ll add the issue of sampling. Saturday is a bad day to conduct opinion polls – people aren’t home. Sunday daytime is bad as well – people are at church, shoveling snow, watching football games. Sunday evenings are better, as are weeknights after work.

Further, polls done over just two nights can lack best-practice sample-management techniques, such as callbacks over several nights to respondents who initially couldn’t be reached. The customary field period for most ABC News polls is four nights. There are a variety of best-practice techniques that can be built into short time-frame or tracking polls; it may be useful to know which were used in New Hampshire.

The Lessons
Even before we have solid answers, there’s a lesson in the wreckage. You’ve heard it here before: I’ve long argued for de-emphasizing horse-race reporting in our election coverage. (“Throttle back on the horse race,” as Jon Cohen and I put it in our Washington Post op-ed Dec. 30.)

The better use of pre-election polls is less to predict outcomes and more to set the table for our election coverage by informing our judgment on the contours of the contest. Not solely who’s ahead, but how and why voters are coming to their choices – what issues motivate them, what candidate attributes attract or repel them, how groups are dividing, which candidates are major players and which not. If one fallout from New Hampshire is to get us all to ease up on the horse race a little, that would be a good result of some apparently bad polls.
