Margin Of Error: Breaking down the polls

Welcome to Margin of Error

Posted February 27, 2014
Updated March 25, 2014

Welcome to Margin of Error. This blog will discuss and analyze survey research, paying particular attention to political polls.

I’m a political science professor at North Carolina State University, and I have taught public opinion research classes for more than 15 years. I also conduct surveys for local and state organizations and for my own research. The name “Margin of Error” reflects the fact that surveys are imperfect but that many of their imperfections are well known.

Surveys can be valuable, but they can also be misleading and misinterpreted. It is important for consumers of polls to know more about how they are conducted. This blog aims to help readers evaluate their results.

News reporters struggle with space limitations when discussing the "nuts and bolts" of surveys. This blog gives me the opportunity to evaluate individual surveys more carefully and to discuss how their modes of administration, question wording, response options or other factors might have shaped their results.

My focus will be on reviewing polls of interest to North Carolina, but as you can see in the example below, I will cover national surveys too.

So let’s dive in with a recent example of the kind of quandary we will be talking about.

How popular is the president? 

Bloomberg recently released a poll indicating President Obama’s approval rating had "rebounded" from 42 percent to 48 percent. They wrote that this increase was "the biggest positive change of his presidency." The very next day, a Wall Street Journal/NBC poll found Obama’s approval rating was 41 percent. Contrary to Bloomberg, the WSJ wrote, "Obama’s approval rating hits new low."

What accounts for these different conclusions?

Both polls were conducted at almost identical times; they used the same question wording and response options; the approval question was placed second on each survey, coming immediately after the same first question; they both sampled adults 18 years and older; and they both called land lines and included a sub-sample of cellphones.

It is possible that some technical aspect of a poll could explain things, but there isn't enough information to say that's the case. Bloomberg has been less transparent with regard to its methods, so the public doesn’t know much about the demographics of its sample or the exact percentage of the sample that responded with cellphones. But in the end, the Bloomberg poll appears to be an outlier for some reason.

Sometimes, surveys are conducted with "bad" samples, where a random draw of respondents isn’t representative of the overall population. A sample with more Democrats or Republicans than truly exist in the population can skew a poll about political preferences. Bloomberg probably drew a sample that included too many Democrats, although I can’t verify that since they won’t release that information.
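To see how much a skewed sample can matter, here is a minimal back-of-the-envelope sketch in Python. The party shares and approval rates below are hypothetical numbers chosen for illustration, not figures from the Bloomberg or WSJ/NBC polls; the point is simply that drawing a few extra points of Democrats can move the topline approval number by several points.

```python
# Hypothetical approval rates by party (illustrative only, not from either poll)
approval_by_party = {"Democrat": 0.80, "Republican": 0.10, "Independent": 0.40}

def topline_approval(party_shares):
    """Weighted average of approval across party groups in a sample."""
    return sum(share * approval_by_party[party] for party, share in party_shares.items())

# Assumed "true" party composition of the adult population (hypothetical)
population = {"Democrat": 0.32, "Republican": 0.28, "Independent": 0.40}

# A sample that happens to draw six points too many Democrats
skewed_sample = {"Democrat": 0.38, "Republican": 0.24, "Independent": 0.38}

print(f"Topline with representative sample: {topline_approval(population):.1%}")    # ~44%
print(f"Topline with Democrat-heavy sample: {topline_approval(skewed_sample):.1%}")  # ~48%
```

Under these made-up assumptions, a sample that is six points too Democratic pushes overall approval from about 44 percent to about 48 percent, roughly the gap between the two polls.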

The reason I think Bloomberg is the outlier, not the Wall Street Journal, is that we can see how these two polls compare to other surveys. I prefer using survey data from Pollster at Huffington Post. Pollster allows you to examine similar polls, such as those conducted with live interviewers, and for samples of adults, not just registered voters. Once I filter out dissimilar surveys, the WSJ poll fits with most others.

Obama’s approval in other surveys is consistently hovering closer to 40 percent than to 50 percent. Take a look at the following graph, where the black line tracks Obama’s approval rating and the red line tracks his disapproval rating. On the other hand, Bloomberg’s estimate that disapproval of Obama is at 48 percent is well in line with the average of other polls.

Problems with reporting

News reports on these polls were misleading, contributing to the appearance of dramatic differences when none exist. This happens because news media usually collaborate with a specific pollster, and their subsequent coverage tends to ignore other polling on the topic when a different pollster conducted it.

For the Wall Street Journal, the 2 percentage point decline in Obama's approval rating since January is within the margin of sampling error (plus or minus 3 percentage points), so it is not possible to tell whether Obama’s approval rating was actually declining or whether the 2-point change simply reflected random error introduced by sampling. Further, Obama has polled lower than 41 percent in other surveys, so it’s not accurate to say this was Obama’s lowest rating.
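For readers who want the arithmetic behind "plus or minus 3 percentage points," here is a short sketch of the standard margin-of-error calculation for a sample proportion. The sample size of 1,000 is an assumption for illustration, not the actual number of respondents in the WSJ/NBC poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000          # hypothetical sample size; many national polls interview about 1,000 adults
approval = 0.41   # 41 percent approval

moe = margin_of_error(approval, n)
print(f"Margin of error: +/- {moe:.1%}")  # roughly +/- 3 points

# A 2-point drop (43% -> 41%) is smaller than the margin of error,
# so it could easily be random sampling variation rather than a real decline.
print(f"Is a 2-point change within the margin of error? {0.02 < moe}")
```

With roughly 1,000 respondents, the margin of error comes out to about 3 points, which is why a 2-point movement cannot be distinguished from sampling noise.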

For Bloomberg, ignoring other polls that show a much lower approval rating gives the misleading appearance that Obama’s approval is rebounding – it’s probably improving only within their own polling. To be sure, Bloomberg is not the only poll to find Obama’s approval rating is closer to 50 percent than 40 percent, but a solid majority of all polls suggests approval is currently closer to 40 percent.

I asked reporters and editors of both stories if they had thoughts about this. The editor of the Bloomberg story suggested different sampling methods could be responsible, but I’m skeptical. Aaron Zitner from the Wall Street Journal was hesitant to comment since Bloomberg has not released important details about its sample that would be useful to know first.

