Margin Of Error: Breaking down the polls

Why did pollsters miss Cantor's slide?

Posted June 20, 2014

It was shocking when Eric Cantor, the U.S. House GOP majority leader, lost his primary race to challenger David Brat. Incumbents rarely lose, and Cantor lost by a whopping 11 percentage points. Adding to Cantor's embarrassment, it was reported that his campaign spent more money at steakhouses than Brat spent running his entire campaign.

Why did Cantor lose? Speculation ranged from voter opposition to immigration reform, to the belief that Cantor had lost touch with his district, to the idea that Tea Party supporters were simply fed up with so-called establishment Republicans.

Perhaps more germane to this blog, Cantor's pollster, McLaughlin & Associates, had him winning by 34 points just 12 days before the election. Cantor was also up by 12 points in a different poll commissioned by the Daily Caller just eight days before the election. How could two polls conducted so close to the election be so wrong?
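For perspective, it helps to ask how much of that miss ordinary sampling error could explain. Here is a back-of-the-envelope sketch in Python; the sample size of 400 is an assumption typical of district-level polls, not a figure reported for either survey.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample of 400 respondents, common for a district-level poll
moe = margin_of_error(400)
print(f"+/- {moe:.1%} on each candidate's share")  # about +/- 4.9 points
```

Even allowing roughly double that on the gap between two candidates, sampling error alone cannot turn a 34-point lead into an 11-point loss. Something else had to be going on.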

Local pundit Barry Saunders suggested that people were deliberately lying to pollsters. In fact, Saunders urges people to lie to pollsters. He wrote, “For years, I’ve been suggesting, as have many others, that people who are polled about for whom they plan to vote and why simply lie or, as it’s known in politics, prevaricate.”

Not only is that a horrible suggestion, it is also not the reason the polling in the Cantor race was so far off. People rarely mislead pollsters about their voting intentions. The main exception: in the past, white voters sometimes overstated their likelihood of voting for black candidates.

I am sympathetic to concerns that too much polling occurs just to see who is ahead. But the polls that I and other academics conduct are designed to understand why voters make their choices. Lying in these polls would undermine decades of research on the factors that shape whether people vote and whom they vote for.

The best explanation, I think, is twofold. First, polling in low-turnout primaries is difficult when the voters who show up differ from those in past elections. Second, polling conducted further from Election Day is less accurate, especially if the race becomes more competitive.

The second Cantor poll, taken closer to the election, was more accurate than the earlier one, though both missed badly. No polls were conducted in the final days before the election, and in retrospect preferences were clearly trending toward Brat. One can see a similar dynamic in the North Carolina Senate primary, where Thom Tillis was barely leading just a month before the election, yet he wound up winning well over 40 percent of the vote and avoiding a runoff.

Moreover, while turnout in the primary was only around 15 percent, about 20,000 more voters showed up in 2014 than in 2012, and these added voters appear to have been less likely to be Republican. Virginia does not register voters by party, so anyone can request either party's primary ballot. Apparently, more self-identified independents and even some Democrats took part in the Republican primary. McLaughlin's post-mortem seems to show these effects.
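To see how much that compositional shift could matter, consider a minimal sketch with invented numbers. The group sizes loosely echo the turnout figures above, but the vote shares within each group are pure assumptions for illustration.

```python
# How crossover turnout can flip a race even if a pollster measured
# the "usual" primary electorate accurately. All shares are invented.
usual_voters = 45_000      # hypothetical habitual-primary-voter base
crossover_voters = 20_000  # the extra voters who showed up in 2014

cantor_among_usual = 0.55      # assumed: Cantor leads his usual electorate
cantor_among_crossover = 0.25  # assumed: crossover voters break for Brat

total = usual_voters + crossover_voters
cantor_share = (usual_voters * cantor_among_usual
                + crossover_voters * cantor_among_crossover) / total
print(f"Cantor: {cantor_share:.1%}")  # about 45.8 percent, a clear loss
```

Under these assumptions, a candidate who genuinely leads among habitual primary voters still loses once the electorate expands, which is exactly the scenario a likely-voter model built on past primaries would miss.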

It even led to the theory of a "Cooter Effect." Cooter was the fictional character Ben Jones played on "The Dukes of Hazzard"; Jones, a Democrat, lost to Cantor a decade ago. Jones urged anyone and everyone to vote, and to vote against Cantor.

Yet McLaughlin would have you believe their massive error was attributable solely to the gap between the electorate they assumed and the one that actually showed up. I am unpersuaded that demographics alone were to blame. More likely, some combination of poll timing, unexpected turnout and a third factor was at play. That third factor, I suspect, involves polling methodology and response bias.

I noticed that their sample was drawn by calling both landlines and cell phones, but only 25 percent of respondents were reached by cell phone. The Pew Research Center, by contrast, easily one of the most authoritative outfits in the business, now contacts at least 50 percent of its respondents by cell phone. In addition, the response rate was not reported, but since most surveys are lucky to achieve a 10 percent response rate, disproportionately reaching those most disposed to vote for Cantor might have skewed the findings.
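Here is a quick sketch of why the phone mix matters. Every number is invented for illustration; the only ingredient taken from the paragraph above is the 25-percent cell share in the sample versus a target closer to 50 percent.

```python
# Reweighting a sample by phone status. If cell respondents favor Cantor
# less and are under-sampled, the raw topline overstates his support.
sample = {"landline": {"share": 0.75, "cantor": 0.62},   # assumed values
          "cell":     {"share": 0.25, "cantor": 0.45}}
target = {"landline": 0.50, "cell": 0.50}  # assumed electorate phone mix

raw = sum(g["share"] * g["cantor"] for g in sample.values())
weighted = sum(target[k] * g["cantor"] for k, g in sample.items())
print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")  # 57.8% vs. 53.5%
```

Pollsters routinely apply weights like these; the danger is that a weight can only correct for a gap the pollster knows to look for.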

For this explanation to be correct, the kinds of people least likely to vote for Cantor would also have to be the kinds of people least likely to answer the phone and talk to a pollster about their vote preference. I'll have more to say about that in a follow-up post on response bias. For now, it's best to point out that election polling is, on average, pretty accurate. It is least accurate in low-turnout elections such as primaries, at greater distances from Election Day, and when unnoticed factors affect turnout.
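To make the mechanism concrete, here is a minimal sketch of differential nonresponse. All of the numbers are invented; the point is only that a topline can be badly wrong even when every respondent answers honestly.

```python
# If one candidate's supporters answer the phone more readily, the
# observed sample misstates the electorate. All figures are assumed.
true_cantor, true_brat = 0.45, 0.55   # hypothetical true preferences

rr_cantor, rr_brat = 0.12, 0.08       # assumed response rates by camp

resp_cantor = true_cantor * rr_cantor
resp_brat = true_brat * rr_brat
observed = resp_cantor / (resp_cantor + resp_brat)
print(f"Observed Cantor share: {observed:.1%}")  # about 55.1 percent
```

With these assumptions, a candidate who actually trails by 10 points appears to lead by 10, with no lying required.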

The real problem, I think, is the false confidence that polling encourages. Sometimes the polls are just wrong, but that doesn't mean we should reverse course and never believe them. We just need to do a better job of warning consumers about their limitations.
