Margin Of Error: Breaking down the polls

They asked only 600 people?

Posted March 7, 2014
Updated March 25, 2014

Sometimes, news stories on the same day report on two or more surveys with different results about the same topic. Is candidate "A" ahead of candidate "B" by 3 percentage points or trailing by 2 percentage points? In fact, this sort of thing occurs frequently during elections, when polling for public consumption is most prevalent.

When the polls diverge, which ones should you believe?

Although your first instinct might be to dismiss both surveys – or perhaps just the one showing your candidate faring worse – the truth is that both survey results are probably equally accurate. A few core factors common to all surveys can usually explain their discrepancies, even significant ones.

Within the margin of error

The margin of sampling error is the most likely explanation for why survey results appear different when they are not really distinguishable. Unfortunately, the margin of sampling error sometimes goes unreported, and even when it is reported, its implications often go undiscussed.

Imprecision is sometimes referred to as the price one pays for not interviewing every eligible person in the population. Yet, when polling is based on a probability sample, we can calculate the amount of imprecision in our estimates. That calculation is based on two things: the size of the sample and the population from which it was drawn. You can do this calculation for yourself with an online margin of sampling error calculator.

Sampling error refers to the difference between our survey estimate and what we would have found had we been able to interview everyone in the population. When it is reported, it is described as “plus or minus” some percentage, for example: +/-3 percentage points. Larger samples have smaller margins of sampling error, and vice versa.

The graph below shows the relationship between the sample size and the margin of sampling error. Notice that increasing the size of the sample eventually leads to diminishing returns of increased precision. That is why most polls include either around 600 respondents or around 1,000 respondents. Polls are expensive, and if adding more respondents doesn’t significantly increase how accurate they are, then it is usually not worth the costs of doing so.


[Graph: Margin of Sampling Error vs. Sample Size. Source: American Association for Public Opinion Research]
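The graph's shape follows from the standard formula for a proportion's margin of sampling error at 95 percent confidence. A minimal Python sketch (assuming a simple random sample from a large population, using p = 0.5 as the worst case and ignoring any finite-population correction) makes the diminishing returns concrete:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of sampling error at 95 percent confidence for a simple
    random sample of size n. p=0.5 yields the worst-case (widest) margin."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: precision shrinks with the square root of n,
# so each additional block of respondents buys less than the last.
for n in (100, 300, 600, 1000, 2000, 5000):
    print(f"n={n:>5}: +/-{100 * margin_of_error(n):.1f} points")
# n=600 gives roughly +/-4.0 points; n=1000 gives roughly +/-3.1
```

Moving from 600 to 1,000 respondents trims the margin by only about a point, while the first few hundred respondents do most of the work, which is why those two sample sizes are so common.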


Sampling error can therefore explain how different survey results from two perfectly reputable firms are not actually distinct.

Outliers happen

Imagine watching a news story on WRAL News where Poll X is reported to have found that Gov. Pat McCrory has a 51 percent approval rating. Later that day, you see another news story about Poll Y where McCrory is said to have a 46 percent approval rating. The likely headlines for each story might also contribute to the perceptions that these two polls are different, because a majority approves of McCrory in one case and a minority in the other.

If each poll included around 600 respondents, then the margin of sampling error for each estimate is about +/-4 percentage points. For Poll X, this means that anywhere between 47 percent and 55 percent of North Carolinians approve of McCrory. Likewise, Poll Y implies that anywhere between 42 percent and 50 percent of North Carolinians approve of him. Since these two ranges overlap, the estimates are, statistically speaking, indistinguishable.
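The overlap can be checked directly. A minimal sketch, using the standard 95 percent confidence interval for a proportion (rather than the rounded +/-4-point figure) with the hypothetical approval numbers above and n = 600 for each poll:

```python
import math

def conf_interval(p_hat, n, z=1.96):
    """95 percent confidence interval for a proportion estimated
    from a simple random sample of size n."""
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

x_lo, x_hi = conf_interval(0.51, 600)  # Poll X: 51 percent approval
y_lo, y_hi = conf_interval(0.46, 600)  # Poll Y: 46 percent approval

# The intervals overlap, so the two polls are statistically indistinguishable.
overlap = x_lo <= y_hi and y_lo <= x_hi
print(f"Poll X: {x_lo:.0%}-{x_hi:.0%}, Poll Y: {y_lo:.0%}-{y_hi:.0%}, overlap: {overlap}")
# Poll X: 47%-55%, Poll Y: 42%-50%, overlap: True
```

Poll X's range runs from about 47 to 55 percent and Poll Y's from about 42 to 50 percent, matching the figures above; because 47 falls below 50, the ranges share common ground.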

Mark Blumenthal, now at Huffington Post's Pollster, has provided a good example of how this same principle applies to differences across a larger number of polls about the same topic. He demonstrates that a large number of polls generating seemingly dissimilar results actually produced estimates within the margin of sampling error, except for just one or two of them.

In fact, over a large number of polls, we expect some survey results will be "outliers," ones that fall outside the margin of sampling error compared to all others. These outliers could be harbingers of emerging trends, waiting to be confirmed by future polling. Or, they could be the results of "bad" samples that do not accurately represent the population from which they were drawn. Bad samples occur naturally by chance, and no pollster is immune to this happening.

Sampling error might be the only kind of survey error that can be precisely quantified, but it is far from the only type that affects the accuracy of surveys. Other critical factors include the method of contacting respondents, the timing of the polls, question wording, question order, answer options and the nature of the population being surveyed.
