Dissecting the “Bradley Effect”

Oct 15, 2008 8:20am

Barack Obama’s lead over John McCain has reignited debate about the alleged “Bradley effect” – the notion that, in polls, white people lie about voting for a black candidate. It remains, at best, what I called it nine months ago: a theory in search of data.

The Bradley effect and its variants (the Wilder effect, the Dinkins effect) attempt to explain why the level of support for white candidates was understated, or for black candidates overstated, in pre-election polls in six biracial contests from 1982 to 1992. That's right – anywhere from 16 to 26 years ago.

Two problems: There are, in fact, beaucoup reasons beyond lying that those polls could have been wrong. And there have been plenty of accurate polls in such contests since.

Consider 2006: Good-quality polls were accurate in five U.S. Senate or gubernatorial elections in which white and African-American candidates faced off. (There was no good, publicly released polling in a sixth.) One, in Massachusetts, understated the white candidate by 5 points (not that it mattered in a 55-35 percent blowout); another, in Pennsylvania, understated the black candidate by 5 points (in a 60-40 rout). But as for a consistent Bradley or Wilder effect: It didn’t happen.

Or take the 2008 Democratic primaries: We find 25 good-quality pre-election polls in which Hillary Clinton was understated beyond sampling error, as the Bradley effect posits. But we also see 24 polls in which Obama’s support was understated – the reverse of the Bradley effect. And there were three in which Clinton was overstated, also contrary to the effect. In none was Obama overstated. An additional 21, moreover, were within sampling error on Clinton, as were 23 on Obama.
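For those curious about the mechanics: “understated beyond sampling error” is a straightforward calculation, not a judgment call. Here’s a rough sketch in Python of how each poll can be classified; the percentages and sample size below are hypothetical stand-ins, not figures from any of the polls above.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 percent margin of sampling error for a proportion p
    (as a fraction) from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def classify(poll_pct: float, actual_pct: float, n: int) -> str:
    """Label a poll's estimate of one candidate's support as understated,
    overstated or within sampling error of the actual result."""
    moe = margin_of_error(poll_pct / 100.0, n) * 100  # back to points
    diff = actual_pct - poll_pct
    if diff > moe:
        return "understated"  # the candidate beat the poll number
    if diff < -moe:
        return "overstated"   # the candidate fell short of the poll number
    return "within sampling error"

# Hypothetical: a poll of 800 respondents had a candidate at 45 percent,
# and she finished with 51 percent of the vote.
print(classify(poll_pct=45, actual_pct=51, n=800))  # -> understated
```

Run that for each candidate in each good-quality poll and you get tallies like the ones above.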

We also looked at this another way, via the margin between the candidates. In final-week, airworthy polls, Clinton was understated on the margin in five states – but Obama was understated in 10. The gap was close to the final result in seven more.
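A technical aside: sampling error on the margin between two candidates is wider than on either candidate’s share alone, because the two shares move against each other. A quick sketch of that arithmetic, again with hypothetical numbers rather than figures from these polls:

```python
import math

def margin_gap_moe(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 percent margin of error on the gap (p1 - p2) between
    two candidates' shares, p1 and p2 as fractions, in a sample of size n.
    For multinomial shares, Var(p1 - p2) = (p1 + p2 - (p1 - p2)**2) / n."""
    return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# Hypothetical: candidates at 48 and 44 percent in a sample of 600.
print(round(margin_gap_moe(0.48, 0.44, 600) * 100, 1))  # about 7.7 points
```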

It seems fair to suggest that an effect, to be an effect, should occur with some consistency.

Though not limiting himself to good-quality polls, a Harvard researcher, Daniel J. Hopkins, wrote a paper in August assessing polling data from 133 biracial elections from 1989 to 2006 – “the first large-sample test of the Wilder effect.” Hopkins concludes that there once was an effect; blacks running before 1996 averaged 2.7 percentage points below their poll numbers. “Yet this effect subsequently disappeared,” Hopkins reports. His paper’s title begins: “No More Wilder Effect.”

Beyond arguing that general “social desirability” leads whites to say they’d vote for a black candidate (and I’m not sure I get that), some theories more specifically suggest that some poll respondents will tailor their stated vote preference to their perception of the interviewer’s race – that whites, for instance, won’t tell a black interviewer they support McCain. Our own current data tell a different story: We’ve been comparing the race of interviewers and respondents in ABC News/Washington Post polls all year, and we see no such effect.

Among registered voters in a dozen national ABC/Post polls this year, 53 percent of white respondents told white interviewers they supported McCain – as did 52 percent of white respondents speaking with black interviewers. Forty-one percent of whites told white interviewers they supported Obama; an identical 41 percent said the same to black interviewers. And 93 or 94 percent of black respondents backed Obama, regardless of the interviewer’s race. (It’s unclear, moreover, how well respondents can even tell the interviewer’s race in a telephone survey. In one of our recent polls, 55 percent identified it correctly, but 23 percent were wrong and 22 percent declined to hazard a guess.)
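The comparison behind those numbers is nothing fancy, just a crosstab of vote preference by respondent’s race and interviewer’s race. A sketch of the tabulation, with made-up records standing in for the actual interview data:

```python
from collections import Counter, defaultdict

# Hypothetical records: (respondent race, interviewer race, preference).
interviews = [
    ("white", "white", "McCain"),
    ("white", "black", "Obama"),
    ("black", "white", "Obama"),
    ("white", "black", "McCain"),
    # ...thousands more records in a real aggregate...
]

# Tally preferences within each respondent-race/interviewer-race cell.
cells = defaultdict(Counter)
for resp_race, intv_race, choice in interviews:
    cells[(resp_race, intv_race)][choice] += 1

# An interviewer effect would show up as different shares across cells
# that hold respondent race constant; in our data, it doesn't.
for (resp_race, intv_race), counts in sorted(cells.items()):
    total = sum(counts.values())
    shares = {c: round(100 * k / total) for c, k in counts.items()}
    print(f"{resp_race} respondents / {intv_race} interviewers: {shares}")
```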

There is an academic study from 1991 with a contrary finding, saying that whites, when speaking with a white interviewer, were 8 to 11 points more apt to support the white candidate in Doug Wilder’s 1989 Virginia governor’s race. But that was one poll, with just 172 white respondents, carried out by student interviewers – and the report itself footnotes a context effect in the study “that may heighten the race-of-interviewer effect.” Our own aggregate data this year, showing no effect, include 7,261 white respondents.

Another concern focuses not on lying outright, but on simply ducking the question by declining to state a vote preference at all. In our polls this year, however, the number of respondents who’ve either declined to answer the horse-race question or said they have no opinion is no higher than in past elections; nor do they look different demographically in any way that would suggest their absence creates bias.

The notion of a Bradley effect is undermined by contemporaneous data as well. As a Pew Research Center memo points out, at the same time that two polls clearly overstated Wilder’s lead in the 1989 Virginia governor’s race, a third gave him a closer, 4-point edge. And while polls in Carol Moseley Braun’s 1992 Senate race understated her white opponent’s support, another, in her primary race, understated her own support, not her white opponents’. Further, while the final WABC/Daily News poll in 1989 overstated David Dinkins’ lead for New York mayor, two weeks earlier it had Dinkins ahead by just 4 points. (He won by 2.) Did New Yorkers suddenly decide to prevaricate?

Consider also a Real Clear Politics piece by Republican pollster Lance Tarrance, who polled in the very 1982 California governor’s race that gave the Bradley effect its name. He calls the concept “a pernicious canard,” and says his own data for Republican George Deukmejian didn’t reflect it. And in a February 1983 report on the Bradley race, California pollster Mervin Field speculated that “racial bias” was a factor, but also offered more concrete reasons for his poll’s error: a heavier than usual and one-sided absentee vote, the presence of a handgun initiative on the ballot and a lower turnout of minority voters than he’d expected.

The turnout issue is critical. When a pollster produces a bad estimate in a pre-election survey, it’s an easy out to blame the respondent. (Visions of New Hampshire are dancing in my head.) But in fact there’s a boatload of ways pre-election polls can go wrong – being done too far from Election Day, employing poor sampling techniques and, especially, misestimating who’s actually going to vote. It could well be that the absence of a Bradley effect since the mid-’90s isn’t about fewer people lying, but about polls simply producing better likely voter models.
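Likely voter models differ from pollster to pollster, and what follows is only a generic sketch of the common cutoff approach, with illustrative screening items and weights rather than ABC/Post’s actual model. The point is that getting the cutoff wrong, with no dishonesty from anyone, is enough to throw off an estimate:

```python
def likely_voter_score(resp: dict) -> int:
    """Score a respondent on hypothetical screening items; a higher score
    means more likely to vote. Items and weights here are illustrative."""
    score = 0
    score += 2 if resp.get("registered") else 0
    score += 2 if resp.get("voted_last_election") else 0
    score += resp.get("stated_intent", 0)  # e.g., a 0-3 self-rating
    score += 1 if resp.get("knows_polling_place") else 0
    return score

def likely_voters(sample: list[dict], cutoff: int = 4) -> list[dict]:
    """Keep only respondents at or above the cutoff; the horse-race
    estimate is then computed over this subset. Misjudge turnout, and
    the poll misses, no lying required."""
    return [r for r in sample if likely_voter_score(r) >= cutoff]
```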

All this leads to the current election. The role of race is fascinating, and no one can confidently predict how it’ll play out. One study says polls may be understating Obama by 3 to 4 percentage points nationally. Another, not specific to any reputed polling effect, suggests Obama could be doing as many as 6 points better if it weren’t for latent racism. (I’m skeptical, as usual, and awaiting details on the model.)

Our own study this summer found that some whites are ill-disposed toward a black candidate, but others are well-disposed toward one, and on balance Obama’s support from whites has been around the average for Democratic candidates in the last eight presidential elections. In our latest poll, moreover, 91 percent of registered voters called themselves comfortable with the idea of Obama becoming the first African-American president; 74 percent were “entirely” comfortable with it.

One thing we know for sure is that today’s polls do not predict the election outcome; they simply show where the contest stands today. If the final polls are accurate – as generally they’ve been in the past – a lot of ink will have been spilled on the Bradley effect in vain. If not, there’ll be many places to look. Blaming respondents should not be the first.
