"A new forecasting method for the Brexit referendum"
The following post was first published at the Oxford University Politics Blog. Here is just the first part, explaining the logic behind our BAFS method.
Is it possible to have a more accurate prediction by asking people how confident they are that their preferred choice will win the day?
As the Brexit referendum date approaches, the uncertainty regarding its outcome is increasing. And so are concerns about the precision of the polls. The forecasts are, once again, suggesting a very close result. Ever since the general election of May 2015, criticism of pollsters has been rampant. They have been accused of complacency, herding, sampling errors, and even deliberate manipulation of their results.
The UK is hardly the only country where pollsters are swiftly losing their reputation. With the rise of online polls, proper sampling can be extremely difficult. Online polls are based on self-selection of the respondents, making them non-random and hence biased towards particular voter groups (the young, the better educated, the urban population, etc.). On the other hand, the potential sample for traditional telephone (live interview) polls is in sharp decline, making them less and less reliable. Telephone interviews are usually done during the day, biasing the results towards stay-at-home parents, retirees, and the unemployed, while most people, for some reason, do not respond to mobile phone surveys as eagerly as they once did to landline surveys. With all this uncertainty it is hard to gauge which poll(ster) we should trust and to judge the quality of different prediction methods.
However, what if the answer to ‘what is the best prediction method?’ lies in asking people not only who they will vote for, but also who they think will win (as ‘citizen forecasters’[1]), and, more importantly, how they feel about who other people think will win? Sounds convoluted? It is actually quite simple.
There are a number of scientific methods out there that aim to uncover how people form opinions and make choices. Elections are just one of the many choices people make. When deciding who to vote for, people usually fall back on their standard ideological or otherwise embedded preferences. However, they also carry an internal signal that tells them how much of a chance their preferred choice has. In other words, they think about how other people will vote. This is why, as game theory teaches us, people tend to vote strategically and do not always pick their first choice, but opt for their second or third, simply to prevent their least preferred option from winning.
When pollsters conduct surveys they are only interested in figuring out the present state of people’s ideological preferences. They have no idea why someone made the choice they made. And if the polling results are close, the standard saying is: “the undecided will decide the election”. What if we could figure out how the undecided will vote, even if we do not know their ideological preferences?
One such method, focused on uncovering how people think about elections, is the Bayesian Adjusted Facebook Survey, or BAFS for short. The BAFS method is first and foremost an Internet poll. It uses the social networks between friends on Facebook to conduct a survey among them. The survey asks the participants to express: 1) their vote preference (e.g. Leave or Remain); 2) how much (in percentages) they think their preferred choice will get; and 3) how they think other people will estimate the chances of Leave or Remain winning the day.
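To make this concrete, here is a minimal sketch of what a single BAFS response might record, based on the three questions above. The field names are ours and purely illustrative; they are not part of the published method.

```python
# Illustrative only: one survey response with the three recorded answers.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    vote_preference: str         # 1) e.g. "Leave" or "Remain"
    own_estimate_pct: float      # 2) estimated vote share for their preferred choice
    perceived_others_pct: float  # 3) how they think others rate the chances of Leave/Remain
```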
Let’s clarify the logic behind this. Each individual holds some prior knowledge as to what he or she thinks the final outcome will be. This knowledge can be based on current polls, or drawn from the information held by their friends and by people they consider more informed about politics. Based on this, it is possible to draw on the wisdom of crowds by searching for informed individuals, thus bypassing the need for a representative sample. However, what if the crowd is systematically biased? For example, many in the UK believed that the 2015 election would yield a hung parliament – even Murr’s (2016) citizen forecasters (although in relative terms the citizen forecaster model was the most precise). In other words, information from the polls creates a distorted perception of reality, which is fed back to the crowd and biases its internal perception. To overcome this, we need to see how much individuals within the crowd diverge from the opinion polls, and also from their internal networks of friends.
Depending on how well they estimate the chances of their preferred choices (compared to what the polls are saying), BAFS scores their predictive power and gives a higher weight to the better predictors (e.g. if the polls are predicting a 52%-48% outcome, a person estimating that one choice will get, say, 80% is given an insignificant weight). Group predictions can be completely wrong, of course, as closed groups tend to suffer from confirmation bias. In the aggregate, however, there is a way to get the most out of people’s individual opinions, no matter how internally biased they are. The Internet makes all of them easily accessible for these kinds of experiments, even if the sampling is non-random.
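To illustrate the weighting idea, here is a rough sketch of one way to down-weight respondents whose estimates diverge strongly from the current poll average. The exact BAFS weighting scheme is not spelled out in this post, so the Gaussian decay and the bandwidth below are our assumptions, chosen only to reproduce the 52%-versus-80% example above.

```python
import math

def predictor_weight(estimate_pct, poll_pct, bandwidth=10.0):
    """Weight a respondent by how close their estimate is to the poll average.

    estimate_pct: respondent's estimated vote share for their preferred choice
    poll_pct:     current poll average for that same choice
    bandwidth:    how fast the weight decays with divergence (assumed value)
    """
    divergence = estimate_pct - poll_pct
    return math.exp(-0.5 * (divergence / bandwidth) ** 2)

# Example from the text: polls put one side at 52%.
print(round(predictor_weight(53.0, 52.0), 3))  # close to the polls -> weight near 1
print(round(predictor_weight(80.0, 52.0), 3))  # estimates 80% -> near-zero weight
```

Under a scheme like this, the aggregate forecast would lean on the respondents whose estimates track the available information, while outliers contribute almost nothing.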
[1] Murr, A.E. (2016) “The wisdom of crowds: What do citizens forecast for the 2015 British General Election?” Electoral Studies 41: 283-288.