Once again, polls forecasting the outcome of a U.S. election were way off target. Why are pollsters so often wrong? Can polls be made more accurate?


A 1947 survey for the Gallup Poll at the University of Iowa library in Iowa City, Iowa, 11 December 2012 (AP Photo/Ryan J. Foley)

Four years ago, spectacular failures in gauging popular opinion and forecasting the outcomes of a British referendum and the American elections prompted questions on both sides of the Atlantic.

What went wrong? Are polls useful? And can they be improved?

In the wake of the U.S. presidential elections in November, those questions are being raised once again.

In the British referendum in June 2016, voters were asked whether they wanted their country to remain in the European Union or leave it. Respected polling organizations, including YouGov and Ipsos MORI, predicted that a comfortable majority would vote to stay. The forecast: 52% for remain, 48% for leave.

The actual result of the Brexit vote was the exact opposite: 52% voted for leave, 48% for remain. The outcome came as a shock to millions of Britons and Europeans, not to mention the polling industry.

Pollsters and forecasters were way off base in the U.S. election.

In the United States, pollsters and forecasters were in near unanimous agreement that the Democratic candidate, Hillary Clinton, would become America’s next president in elections five months after Brexit. Her Republican opponent, Donald Trump, was considered an extreme long shot. He won.

This November, U.S. prognosticators got the most important race right: Trump lost to Democratic challenger Joe Biden. But they vastly underestimated support for Trump. On November 3, election day, the highly regarded website FiveThirtyEight, which focuses on opinion poll analysis, saw Biden ahead by 8.4%. The actual result was less than half that, 3.9%.

An array of polls was equally wrong in projecting that Democrats would expand their majority in the House of Representatives by up to 15 seats. In fact, they lost 10.

In 2016, the American Association for Public Opinion Research described the elections as “a jarring event for polling in the United States.” A 104-page report, widely dubbed an autopsy, blamed bad state polls for the shocking upset while national polls were largely accurate. (While Clinton won the popular vote, Trump won in enough states to gain the presidency by way of the Electoral College.)

Fewer people are answering pollsters’ questions.

Autopsies for the 2020 elections are still being written, but several problems complicating the work of pollsters, aggregators and forecasters have been obvious for years. (Pollsters interview people to get their opinions; aggregators draw conclusions from a variety of polls; forecasters make predictions.)

For polls on elections, the problem starts with the key question pollsters tend to ask: “Who would you vote for if the elections were held tomorrow?” The result is a snapshot in time, and minds can change with events.

Among several trends that make accurate surveys more difficult and explain wrong calls, one stands out: a steady and seemingly unstoppable decline in the number of people willing to answer the questions of pollsters. Cell phone users are particularly prone to ignore calls from pollsters.

The overall response rate, to use the technical term, dropped from 90% in the 1930s, when people saw it as a civic duty to participate in polls, to just 6% in 2018, according to the Washington-based Pew Research Center, a think tank.

What does that mean in practice? Two weeks after the U.S. elections, David Hill, the director of a research firm, gave an example in an essay in the Washington Post. “To complete 1,510 interviews over several weeks, we had to call 136,688 voters in … Florida. Only one in 90-odd voters would speak with our interviewers.”
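Hill's figures can be checked with simple arithmetic. A minimal sketch, using only the numbers quoted from his essay:

```python
# Figures quoted from David Hill's Washington Post essay.
calls_made = 136_688   # Florida voters dialed over several weeks
interviews = 1_510     # completed interviews

response_rate = interviews / calls_made
calls_per_interview = calls_made / interviews

print(f"Response rate: {response_rate:.1%}")              # about 1.1%
print(f"Calls per completed interview: {calls_per_interview:.0f}")  # 90-odd
```

That 1.1% yield among people actually dialed is even lower than Pew's 6% overall response-rate figure, which underlines how labor-intensive phone polling has become.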

This is a problem for which the polling industry has yet to find a solution. In 2020, there were 239 million Americans eligible to vote. To select 1,000 or 1,500 — typical samples — who are representative of the whole is difficult at the best of times, even more difficult if certain groups refuse to be interviewed.
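Why samples of 1,000 to 1,500 at all? Under textbook simple random sampling, a poll's margin of error shrinks with the square root of the sample size, almost regardless of population size. The sketch below is an idealized illustration, not how pollsters actually report results; in practice they also weight responses to correct for exactly the non-response problem described above:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case; z=1.96 is the 95% confidence multiplier.
    Ignores weighting and non-response, so real-world error is larger.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000: about ±{margin_of_error(1000):.1%}")  # about ±3.1%
print(f"n=1,500: about ±{margin_of_error(1500):.1%}")  # about ±2.5%
```

Quadrupling the sample only halves the margin of error, which is why pollsters settle for samples around this size; the hard part, as the response-rate numbers show, is making those samples representative.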

According to Hill, many of those who declined interviews fit the profile of likely Trump supporters.

Is internet polling better?

In a post-election panel discussion, Lee Miringoff, the director of the Marist Institute for Public Opinion, explained how a small sample can reflect the views of a country of 328 million this way:

“Intuitively, it doesn’t feel that 1,000 people could tell you much of anything about what the country thinks. But imagine a huge vat of soup and you stir it all up and you want to know what that tastes like, one spoonful will work. And that’s what we are doing. We are trying to get a flavor of the electorate.”

The obvious question, not asked at the panel: How does that soup taste without salt, seasoning and other important ingredients?

The main alternative to traditional methods is internet polling, a subject of lively debate among experts unhappy with the deluge of criticism that usually follows wrong forecasts, of which there have been many apart from the 2016 shockers in Britain and the United States.

Election polls have been wrong in many countries.

Poll-based predictions were completely wrong about the Israeli election in 2015, the Greek bailout referendum that same year and Scotland’s independence referendum in 2014. But botched polls have a history that goes back decades.

Take the U.S. presidential election of 1948. Its unexpected outcome was captured in a now-famous photograph that shows a broadly smiling Harry Truman holding up the front page of the Chicago Daily Tribune newspaper with the banner headline, “Dewey Defeats Truman.”


(AP photo via Wikipedia)

Numerous polls and pundits had been so certain that the Republican candidate, Thomas Dewey, would win that the editors of the Tribune saw no need to wait for the official result.

Then and now, the relationship between polls and the media has been symbiotic. Polls are the basis of what is known as horse-race reporting. “The news media see every poll like an addict sees a new fix,” two respected political scientists, Norman Ornstein and Alan Abramowitz, complained in the wake of the 2016 U.S. elections.

So, are there solutions to these long-standing problems? Among those under debate are online polls, but so far there is no secure way to ascertain that participants are registered to vote. One new idea is to text interview requests to a random sample of people on the voter rolls.

Will there be more accurate election polls in the future? Don’t hold your breath.

Three questions to consider:

  1. From what you know about polling, do you think it is good for democracy?
  2. The author mentions that the candidate with the most votes can lose the election. Why is that?
  3. Why do you think national polls in the United States are more accurate than state polls?

Bernd Debusmann began his international career with Reuters in his native Germany and then moved to postings in Eastern Europe, the Middle East, Africa, Latin America and the United States. For years, he covered mostly conflict and war and reported from more than 100 countries. He was shot twice in the course of his work: once covering a night battle in the center of Beirut and once in an assassination attempt prompted by his reporting on Syria. He now writes from Washington on international affairs.
