So what happened with the 2016 polls anyway?

Short version: national polls were actually quite good. State polls had real problems, largely because education became much more strongly associated with presidential vote and state polls were less likely to correct for over-sampling more-educated voters.

First, a nice review from Nate Cohn:

Education was a huge driver of presidential vote preference in the 2016 election, but many pollsters did not adjust their samples — a process known as weighting — to make sure they had the right number of well-educated or less educated respondents.

It’s no small matter, since well-educated voters are much likelier to take surveys than less educated ones. About 45 percent of respondents in a typical national poll of adults will have a bachelor’s degree or higher, even though the census says that only 28 percent of adults (those 18 and over) have a degree. Similarly, a bit more than 50 percent of respondents who say they’re likely to vote have a degree, compared with 40 percent of voters in newly released 2016 census voting data.

This was a big deal in 2016, since Mrs. Clinton fared very well among well-educated voters. Her lead might have increased by around four percentage points in a typical national survey that wasn’t weighted by education. The effect shrinks in polls weighted more heavily, including by party registration or past turnout, but there were virtually no public polls that were weighted this way…

What’s very clear is that several typical sources of polling error, in addition to the education issue in lower-quality state polls, contributed to a pro-Clinton bias in pre-election polls. It may or may not explain all of the error, but it probably explains most of it.

That seems to be positive news for pollsters. Without a good explanation for the misfire, it would be understandable to wonder whether lower response rates had degraded the political survey research to the point where our metaphorical field goal kicker ought to consider retirement.

But the education gap among supporters of Mr. Trump and the eventual Democratic nominee will probably persist into 2020, and it will be especially challenging to pollsters in the Northern states that had a relatively high percentage of working-class whites and that may again play an outsize role in the Electoral College.

The lack of high-quality state polling will also be tough to fix, especially if the cost of polling rises further or if local newspaper budgets continue to shrink or vanish. The failure of many state pollsters to even ask respondents about education does not inspire much confidence in their ability to stave off less predictable sources of bias.

Many of the challenges that pollsters faced in 2016 aren’t going away. Next time, the challenges could easily be greater.

Also, a really thorough look at things from an impressive group put together by the American Association for Public Opinion Research. Here's some good stuff from the executive summary:

National polls were generally correct and accurate by historical standards.  National polls were among the most accurate in estimating the popular vote since 1936. Collectively, they indicated that Clinton had about a 3 percentage point lead, and they were basically correct; she ultimately won the popular vote by 2 percentage points. Furthermore, the strong performance of national polls did not, as some have suggested, result from two large errors canceling (under-estimation of Trump support in heavily working class white states and over-estimation of his support in liberal-leaning states with sizable Hispanic populations).

State-level polls showed a competitive, uncertain contest…  In the contest that actually mattered, the Electoral College, state-level polls showed a competitive race in which Clinton appeared to have a slim advantage. Eight states with more than a third of the electoral votes needed to win the presidency had polls showing a lead of three points or less (Trende 2016).[2] As Sean Trende noted, “The final RealClearPolitics Poll Averages in the battleground states had Clinton leading by the slimmest of margins in the Electoral College, 272-266.” The polls on average indicated that Trump was one state away from winning the election.

but clearly under-estimated Trump’s support in the Upper Midwest.  Polls showed Hillary Clinton leading, if narrowly, in Pennsylvania, Michigan and Wisconsin, which had voted Democratic for president six elections running. Those leads fed predictions that the Democratic Blue Wall would hold. Come Election Day, however, Trump edged out victories in all three.

There are a number of reasons as to why polls under-estimated support for Trump. The explanations for which we found the most evidence are:

  • Real change in vote preference during the final week or so of the campaign. About 13 percent of voters in Wisconsin, Florida and Pennsylvania decided on their presidential vote choice in the final week, according to the best available data. These voters broke for Trump by nearly 30 points in Wisconsin and by 17 points in Florida and Pennsylvania.
  • Adjusting for over-representation of college graduates was critical, but many polls did not do it. In 2016 there was a strong correlation between education and presidential vote in key states. Voters with higher education levels were more likely to support Clinton. Furthermore, recent studies are clear that people with more formal education are significantly more likely to participate in surveys than those with less education. Many polls – especially at the state level – did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was over-estimation of support for Clinton.
  • Some Trump voters who participated in pre-election polls did not reveal themselves as Trump voters until after the election, and they outnumbered late-revealing Clinton voters. This finding could be attributable to either late deciding or misreporting (the so-called Shy Trump effect) in the pre-election polls. A number of other tests for the Shy Trump theory yielded no evidence to support it.
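The weighting fix that both Cohn and the AAPOR report describe is simple arithmetic: each education group's responses get multiplied by (population share / sample share), so over-represented college graduates count for less and under-represented non-graduates count for more. A minimal sketch of that adjustment, using hypothetical shares and candidate margins for illustration (these numbers are assumptions, not taken from any actual poll):

```python
# Minimal sketch of post-stratification weighting by education.
# All shares and margins below are hypothetical, for illustration only.

def education_weights(sample_shares, population_shares):
    """Weight for each group = population share / sample share."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical raw sample: college grads over-represented, roughly
# echoing the ~50% sample vs ~40% electorate figures Cohn cites.
sample = {"college": 0.50, "no_college": 0.50}
population = {"college": 0.40, "no_college": 0.60}

weights = education_weights(sample, population)
# -> {"college": 0.8, "no_college": 1.2}

# Hypothetical candidate margins by group (Clinton minus Trump):
margins = {"college": 0.20, "no_college": -0.05}

unweighted = sum(sample[g] * margins[g] for g in sample)
weighted = sum(sample[g] * weights[g] * margins[g] for g in sample)

print(round(unweighted, 3))  # 0.075 -> Clinton +7.5 unweighted
print(round(weighted, 3))    # 0.05  -> Clinton +5.0 after weighting
```

Even with these made-up numbers, skipping the education weight inflates the Clinton margin by a couple of points, which is the scale of bias Cohn describes in unweighted national surveys.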

So, there you have it. What was really blown was people over-interpreting the polls in their prediction models (something the AAPOR report also discusses; here's looking at you, Sam Wang). But as far as the pollsters in 2016 go: yes, there were some errors, in state polls especially, but on the whole it was an entirely reasonable performance.


About Steve Greene
Professor of Political Science at NC State
