From methods to the madness: Deciphering the presidential polls

There’s plenty for voters to digest in the final days of the 2016 presidential campaign – an election that may be remembered, beyond controversies and late surprises, for the intense debate over polling and its role in the race.

We asked University of Nebraska-Lincoln researchers Dona-Gene Barton, Kristen Olson and Jolene Smyth to chime in: What goes into polls, how they are interpreted (and misinterpreted) and whether they may have an impact on the Nov. 8 vote.

Polls and science

Smyth, a sociologist who studies survey methodology, said it’s important to examine how each poll is built. As director of the university’s Bureau of Sociological Research, she said scientific polls must be representative of the population and composed of questions respondents will answer accurately and truthfully.

“A lot of science goes into achieving both of these,” Smyth said. “For example, if our sample unit is individual people, we want to draw both young and old, married and unmarried, poor and well off, liberal and conservative, etc.”

Olson, a sociologist who focuses on survey methods and why non-response, measurement and coverage errors occur in surveys, said mobile telephones are an emerging and important factor.

“Different survey organizations use different methods to conduct pre-election polls and create estimates from those polls. One big issue in polling is the frame, or list of people or households who are eligible to be sampled for the poll,” she said.

Polls conducted with prerecorded audio, known as Interactive Voice Response, do not reach cell phones, she said. Any IVR-based poll therefore systematically excludes adults who can be reached only on a cell phone, a group estimated nationally at about 48 percent.

Barton, a political scientist who studies Americans’ political behavior and how well they use information to form judgments, said polls that exclude mobile phones tend to be less accurate, as was seen in the 2012 presidential race between Republican Mitt Romney and President Barack Obama. Many polls, including Romney’s internal ones, predicted the GOP nominee would edge out a victory. After the election, pollsters reassessed sampling techniques that had not always accounted for an electorate increasingly abandoning landlines.
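To see why leaving out cell-only adults matters, here is a minimal simulation. The 48 percent figure comes from the estimate above; the support rates in each group are made-up illustrative numbers, not data from the article:

```python
import random

random.seed(42)

# Hypothetical electorate: 48 percent reachable only by cell phone (the
# share cited above). Suppose, purely for illustration, that cell-only
# adults favor Candidate A at 55 percent while landline-reachable adults
# favor A at 45 percent.
N = 100_000
population = []
for _ in range(N):
    cell_only = random.random() < 0.48
    p_support = 0.55 if cell_only else 0.45
    population.append((cell_only, random.random() < p_support))

true_support = sum(support for _, support in population) / N

# A landline-only (IVR-style) poll can sample only from the landline frame.
landline_frame = [support for cell, support in population if not cell]
poll_estimate = sum(random.sample(landline_frame, 1000)) / 1000

print(f"true support:  {true_support:.3f}")   # close to 0.50
print(f"poll estimate: {poll_estimate:.3f}")  # closer to 0.45, biased low
```

Drawing a larger sample from the landline frame would only tighten the estimate around the wrong value; the coverage error itself does not shrink.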

Disparities, disparities, disparities

Nearly every major news organization seems to have its own poll, often with notably different results. To help explain the differences, Smyth and Olson pointed to how pollsters sample “likely voters.”

Just over half of the voting-eligible population nationally turns out to vote, Olson said. Estimating whom voters will select therefore requires first identifying the people who are likely to vote.

“Different pollsters have different methods for identifying ‘likely voters’ based on historical voting rates – getting this right can be hard,” Olson said.

Pollsters have a lot of previous years’ data to help formulate likely voter models and estimates of voter turnout, Smyth said: “That data has helped them be fairly accurate throughout history, but data from the past cannot perfectly predict what will happen in any current election year.”
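The likely-voter adjustment described above can be sketched as a simple turnout-weighted average. The respondents and turnout probabilities below are invented for illustration; real models are far more elaborate:

```python
# Sketch of a likely-voter adjustment: weight each respondent's stated
# preference by an assumed probability that they will actually vote.
# All numbers here are hypothetical.
respondents = [
    # (supports "Candidate A", estimated turnout probability)
    (True,  0.9),   # e.g., voted in the last several elections
    (True,  0.3),   # e.g., newly registered, little voting history
    (False, 0.8),
    (False, 0.6),
    (True,  0.7),
]

# Unweighted estimate: treats every respondent as a certain voter.
unweighted = sum(s for s, _ in respondents) / len(respondents)

# Turnout-weighted estimate: likely voters count for more.
total_weight = sum(p for _, p in respondents)
weighted = sum(p for s, p in respondents if s) / total_weight

print(f"all respondents: {unweighted:.3f}")  # 0.600
print(f"likely voters:   {weighted:.3f}")    # 0.576
```

Getting those turnout probabilities right is the hard part Olson mentions: two pollsters with identical raw responses can publish different numbers if their turnout assumptions differ.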

So how do you find the signal amid all the noise?

It’s good not to focus on any single poll but to look across multiple polls, Olson said. Watch for independent polling organizations that are transparent about their methods. Some polling aggregators can be helpful because they rate the quality of the methods used by the different polls.

Olson also suggested The New York Times’ The Upshot as a useful aggregator tool.
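Why looking across multiple polls helps can be shown with a quick sketch: independent, unbiased polls scatter around the true value, while their average scatters much less. The true-support value and poll sizes are assumptions for illustration:

```python
import random
import statistics

random.seed(7)

TRUE = 0.48  # assumed true support, for illustration only

def one_poll(n=800):
    """Simulate a single unbiased poll of n respondents."""
    return sum(random.random() < TRUE for _ in range(n)) / n

polls = [one_poll() for _ in range(10)]
average = statistics.mean(polls)

print(f"single polls range: {min(polls):.3f} to {max(polls):.3f}")
print(f"10-poll average:    {average:.3f}")
# The average sits closer to the true value than the extreme single polls.
```

This is only the variance-reduction half of the story; averaging cannot fix polls that share the same systematic bias, which is why aggregators also weigh methodological quality.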

Skewing the numbers

Opt-in polls, which come in the form of quick-click surveys or polls on a website, aren’t scientifically sound, Smyth said – they don’t have a means to ensure that the respondent pool mirrors the population.

“Some are impressed at the sheer number of people who respond to a poll, but having a high number of respondents doesn’t make an opt-in poll any more scientifically rigorous,” she said.
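Smyth’s point, that respondent counts cannot rescue an opt-in poll, is easy to simulate. Assume, for illustration only, that true support is 50 percent but supporters of one candidate are twice as likely to opt in:

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.50  # assumed true support, for illustration

def opt_in_poll(n):
    """Simulate n opt-in responses with self-selection bias."""
    results = []
    while len(results) < n:
        supports = random.random() < TRUE_SUPPORT
        respond_prob = 0.6 if supports else 0.3  # supporters opt in more
        if random.random() < respond_prob:
            results.append(supports)
    return sum(results) / n

estimates = {n: opt_in_poll(n) for n in (1_000, 100_000)}
for n, est in estimates.items():
    print(f"n={n:>7}: estimate = {est:.3f}")
# Both sample sizes land near 0.67, not 0.50: more respondents make the
# biased answer more precise, not more accurate.
```

Both sample sizes converge on roughly 67 percent, the self-selected share, so the hundredfold-larger poll is simply more precisely wrong.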

That hasn’t stopped candidates, their surrogates or their supporters from latching onto opt-in polls when they tell them what they want to hear, Barton said. Following the first presidential debate in late September, Republican nominee Donald Trump tweeted that a series of polls showed he had won the faceoff with Democratic nominee Hillary Clinton. The problem: they were primarily opt-in, online polls.

When it comes to disbelieving credible polls, perceptions of bias may be of greater concern, Barton said. Overwhelming research shows that partisans are more likely to embrace and believe poll results that support their favored candidate, and less likely to believe and accept information that runs counter to their beliefs.

So, do polls even matter?

Sort of. For media outlets, citing polls can keep viewers tuned in to get a sense of the “horse race” aspect of the campaign. And they play a slight role when voters head to the polls.

“Concern arises every election that the media, by focusing on who is ahead or behind in the polls, may sway public opinion and influence the election results,” Barton said. “Evidence supports this effect, where voters may shift their support to be in line with the majority.”

But, Barton added, those effects are normally small – and are more of an issue earlier in the campaign, especially during party primaries in the spring.

“During the general election, vote intentions solidify,” she said.
