
When a Siena College poll came out late last month showing Donald Trump winning the key battleground state of Florida, it was a break with the trend.
“Our Florida poll was one of the few polls that had an advantage to Trump in the final days,” said Don Levy, director of the Siena College Poll. “When we were coming out with that we were seen as an outlier.”
But it turns out the outliers that put states like Florida, North Carolina and Pennsylvania within reach for Donald Trump, now president-elect, weren’t just in the ballpark; they called the ballgame.
Nationwide and state-by-state polls, polling averages, polling aggregations and models that project electoral outcomes based on polls all gave Hillary Clinton the nod as a heavy favorite to win the presidency. Some projections put the likelihood of a Clinton victory at a near certainty – 98 or 99 percent.
But the polls widely missed, and the prognostications came up short.
“The polls clearly got it wrong this time,” the American Association for Public Opinion Research said in a public statement Wednesday.
That association, a Washington-based professional organization of pollsters and public opinion researchers, promised to convene a committee to study the 2016 presidential polling miss, comparing different methodologies and how they performed. It won’t be until May that the group is ready to release its conclusions.
“In the aggregate, there is something the pollsters need to take a look at,” Levy said. “Everyone that does those state-by-state polls that are transparent have to lay out their cards on the table and figure out the source of error.”
At the most general level, what happened is easy to grasp: polls underestimated Republican turnout in rural areas and overestimated Democratic turnout in urban areas. But the reasons the polls missed those trends are more complicated. Did pollsters misjudge the likely demographics of the electorate? Were Trump supporters less likely to answer pollster calls, or less likely to reveal their true loyalties?
Levy said it’s too early to tell what went wrong with the polls that were off the mark this year, but he highlighted some areas that pollsters will need to analyze in the coming months. From his experience, Levy said he found it unlikely that Trump supporters were hiding their support, pointing to New York congressional polls that also gauged presidential support.
“People told us they were going to vote for Trump and, as you can see from the congressional races, we got them all right,” Levy said.
The more likely problem, he said, was in the demographic makeup that pollsters expected to turn out on Tuesday, which discounted the number of white rural voters who would show up to the polls for Trump. And just as polls understated rural turnout, they overstated turnout in cities and among black voters, who favored Clinton.
Pollsters use historical turnout trends and demographic changes to project expected turnout, but Levy said pollsters may have to allow for “more fluid” turnout models in the future.
Why did Siena have a better read on Florida and North Carolina, where its poll estimated a toss-up? Levy said it may have to do with the way Siena weights the likelihood that a registered voter will actually cast a vote on Election Day.
Many polls apply a likely voter screen: respondents the pollsters don’t expect to actually vote are cut from the poll’s final analysis. Rather than tossing out respondents who don’t make it through such a screen, Siena uses a probability scale that factors in a registered voter’s voting history and responses to estimate the likelihood they will vote. Siena includes everyone it reaches in the final poll results, but weights each response by the probability that the respondent will vote.
For example, the preferences of someone who hasn’t voted in the past few elections and tells a pollster they aren’t certain if they will vote may be entirely wiped out of some polls. Using Siena’s method, however, those preferences would be captured and factored into the poll – albeit scaled to the probability that the person actually votes.
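The difference between the two approaches can be sketched in a few lines of code. The data and the 0.5 cutoff below are hypothetical, chosen only to illustrate how a hard screen discards infrequent voters entirely while probability weighting keeps their preferences at a discount; this is not Siena's actual model.

```python
# Hypothetical respondents: (candidate preference, estimated probability of voting)
respondents = [
    ("Clinton", 0.9),
    ("Trump", 0.8),
    ("Trump", 0.3),    # infrequent voter: a hard screen drops this response
    ("Clinton", 0.2),  # infrequent voter: a hard screen drops this response
    ("Trump", 0.6),
]

def hard_screen(sample, cutoff=0.5):
    """Keep only respondents above the cutoff; count each equally."""
    tallies = {}
    for candidate, prob in sample:
        if prob >= cutoff:
            tallies[candidate] = tallies.get(candidate, 0) + 1
    total = sum(tallies.values())
    return {c: n / total for c, n in tallies.items()}

def probability_weighted(sample):
    """Keep every respondent; weight each response by turnout probability."""
    tallies = {}
    for candidate, prob in sample:
        tallies[candidate] = tallies.get(candidate, 0.0) + prob
    total = sum(tallies.values())
    return {c: w / total for c, w in tallies.items()}

print(hard_screen(respondents))          # infrequent voters' preferences vanish
print(probability_weighted(respondents)) # their preferences survive, scaled down
```

In this toy sample the hard screen shows Trump at two-thirds of the vote, while the weighted version shows him nearer 61 percent, because the low-probability Clinton voter still counts for something.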
Ultimately, Siena pollsters said they were proud of how their polls measured up against the actual results. In New York, the poll predicted Clinton would win the state by 17 points; she ultimately won by 21 points. Siena predicted Chuck Schumer would win his Senate election by 42 points; he won by 43 points. And the poll correctly called all five congressional races it surveyed within the margin of error.
“We nailed everything,” Siena pollster Steven Greenberg said of the New York congressional races.
They weren’t perfect, however. In a late October poll, Siena gave Clinton a seven-point lead over Trump in Pennsylvania. Trump won Pennsylvania by just over a point on his way to Electoral College victory. The polling environment has been made more difficult by the diffusion of ways to reach voters and the unwillingness of some to respond to surveys, but pollsters are looking for new ways to sample where the public stands, Levy said. That doesn’t necessarily mean they will always get it right.
“There are continuing attempts to explore multiple methods of measuring public opinion,” Levy said. “It’s an exciting time to be a pollster, but we may see more errors.”
Reach Gazette reporter Zachary Matson at 395-3120, [email protected] or @zacharydmatson on Twitter.