Tuesday, 28 June 2016

BREXIT, THE REACTION: democratic deficit, the falling elites, and the future of the EU

After hearing the results of the Brexit referendum early on Friday morning, my initial reaction was this comment on Facebook:

After cooling down over the next few days and reading about Brexit from a number of perspectives, I have to say that everything I said initially still holds. I will carefully explain each point.

First, the democratic deficit problem. This is, in a sense, my PhD topic, meaning that I'll be writing quite a lot about it over the next few years. Before explaining why this outcome is a deficit of democracy, we must first define the concept of democratic deficit (or perhaps even democracy failure). Basically, the democratic deficit concept concerns the interaction between politicians and partial interest groups - a legitimate consequence of electoral competition and of the political freedom to express and fight for one's interests - and whether or not this interaction results in adverse economic outcomes (in my PhD I will be focusing on linking the failures of democracy to the rise in income and wealth inequality). The point is that in some cases democracies fail to erect the institutions necessary to prevent the favouritism of partial interests, the consequences of which are usually cronyism, corruption, nepotism, clientelism, etc. To make this a bit more relevant, consider the following excerpt from an earlier essay of mine:
"The recent economic crisis has exposed all of democracy’s deficits. Dysfunctionality and political gridlocks that only worsened the crisis became a standard in the United States and Europe. Government bank bailouts and rapid accumulation of debt struck a huge blow to the positive perception of Western democracy and capitalism itself. In addition, the Western model of democracy is facing a serious problem with rising inequality and to some extent the lack of social mobility. Various interest groups dominate the political spectrum in biasing budgetary expenditures towards their preferred goals, leaving relatively less money for redistribution programs aimed at the poorer ends of society, particularly in terms of education and health care. Politicians themselves engage in direct or indirect vote buying (either through gerrymandering or by giving direct concessions to their support groups), budget-maximizing bureaucrats add to the rise in government spending which isn't targeted towards the general population, while political campaigns are financed heavily by a corporate sector desiring favourable legislation. All of this adds to concerns over the poor image of Western-style democracy. It has failed to become fully robust to cronyism."
So how is this linked to Brexit? In two ways. First, the 'angry' Leave voters who consider themselves the losers of globalization (the 'immigrants are taking our jobs' argument). Globalization always has its winners and losers; this is inevitable. As the theory of international trade teaches us, the losers are usually the holders of a country's scarce factor of production - in the case of the West, the blue-collar working class. Naturally, as the working class keeps losing jobs, they have a tendency to blame both immigrants and the domestic political elites who have failed to protect them from these misfortunes. The European Union is a natural enemy of anti-globalizationists. It represents everything they fear - in particular their intrinsic loss of competitiveness as the market attracts better-skilled workers to replace them. This is true for every country in the West, without exception. There is growing discontent among the working classes regarding the benefits of globalization and its prime manifestation - the European Union. They feel alienated, as they did not reap any visible or direct benefits from it. Naturally, they will rebel against globalization and will support any political platform (far-right or far-left) that delivers the same bold criticism.

However, the working class voters often fail to realize that the reason for their economic misfortunes lies beyond globalization itself and can be traced to their domestic institutions, in particular to their domestic political elites. How exactly?

I've written about this before as well. Globalization, in addition to bringing enormous opportunities for wealth creation, in some instances led to the abrupt rise of new powerful elites - banking, political, and media - which threatened the sustainability of the system. The principles of competition were replaced by the rapid accumulation of power within a handful of selected business groups (recall the Leveson inquiry or the LIBOR scandal). British political elites forgot the distinction between being pro-business (favoring monopolies and oligopolies, and picking winners) and pro-market (supporting competition and equal opportunity). This was not, as many falsely believe, a consequence of Thatcher's reign, as she vehemently opposed any partial big-business interests that undermined the interest of the customer, as well as the rapid accumulation of power within any industry, particularly banking or politics. The post-Thatcher political leadership forgot those lessons and set the country on a path of increasing cronyism, rising inequality, and declining social mobility, all of which was exacerbated by the financial crisis, the post-crisis austerity approach, and the subsequent long (double-dip) recovery. Luckily, none of these undermined Britain's vast accumulated wealth, but they certainly limited its growth and split its population into the winners and losers of globalization.

The second way in which Brexit represents a democratic failure is the huge political (and even constitutional) instability and economic uncertainty arising as a consequence. The markets' reaction is only a natural response to the huge uncertainty surrounding Britain and the EU. So the paradox here is that a legitimate (and direct) decision of the electorate - the very essence of democracy - has undermined the economic and political stability of the country. Some say this won't last long and that the long-term consequences are likely to be positive for Britain; however, it's hard to see how that will be achieved given that the country itself, the United Kingdom, may fall apart (if Scotland and Northern Ireland hold their independence referendums).

So, the failure of elected political elites to suppress cronyism and the subsequent decision of the electorate that undermines the country's political and economic stability can both be considered good examples of how democracies need not always yield optimal outcomes. 

Second, the decline of trust in establishment elites, and the rise of populism. This is closely linked to the previous points. The failure of the political establishment to prevent cronyism and the rise of new powerful elites has severely deteriorated the people's trust in institutions and expert opinion. Whether domestic (the Treasury, Bank of England, prominent UK universities, and think tanks) or international (the IMF, World Bank, any EU-related institution) - all were considered untrustworthy during the Brexit debate. Why? One reason was the use of populist ideology to unsettle the electorate by accusing the experts (particularly economists) of having a vested interest, as they tend to receive a lot of money from the EU. This argument worked well given the anger against the establishment: expert opinion was seen as a justification of establishment policies and was therefore dismissed as untrue. Whatever argument was thrown into the debate, not a single one had the allure of impartiality. On the other hand, the establishment (and the experts to some extent) made their own mistake of ignoring the concerns of their electorate, in particular the lower and middle classes. This led to a further deterioration in the relationship between the elites and the "non-elites", implying even lower levels of trust among the non-elites. Unfortunately, there are no signs of this damaged relationship improving any time soon.

Third, Cameron's political legacy. Boy, did he mess up! This time last year he was in heaven. He had just secured his second mandate with a landslide electoral victory, freeing his Conservative party from the shackles of the LibDem coalition, and was on course to leave a truly positive political legacy to Britain. After winning two general elections and securing a major political win in the Scottish referendum, he decided to tie his political career, not to mention the future of his country, to the Brexit referendum. Just as Blair destroyed his positive legacy with the Iraq war, Cameron did perhaps even worse with the EU referendum. A man who had all he ever wanted just lost everything in a crazy gamble with his own party and what is mostly his own electorate. Not unlucky, just plain stupid.

Fourth, Boris Johnson as the new PM, the spread of populism, and the tectonic changes that await us. Boris Johnson is currently in a very difficult position. He appears to have gotten exactly what he wanted - Leave won, and he is now the first favorite to take over the party and the premiership in October. Which is four years earlier than he'd hoped. However, his victory seems awfully bitter to him as well (judging from his reactions and, well, his body language). It is as if he had thought: "there is no way Leave will win, but I will make myself the leading figure of the campaign, and use this to take over the party, with its looming euro-skeptics, after Cameron quits" (Cameron had announced he would step down before 2020, saying he was not seeking a third term). Sounds like a perfect plan, particularly as Boris would have no problem winning the 2020 general election against the lackluster Labour leader Jeremy Corbyn (who is also likely to face a leadership challenge soon). Except that it backfired - Leave won! So Boris got what he wanted and is becoming the PM; why isn't he happy? Perhaps because he realizes he probably won't be PM for long. Time for another Batman quote (this time from the Joker's conversation with Harvey Dent):

As for the consequences and the spread of populism, my immediate fears are not about the EU, but about the upcoming US presidential election in November. There you have another typical establishment candidate, Hillary Clinton, who would in normal times, given her enormous experience (First Lady, Senator, Secretary of State, etc.), be a shoo-in to lock in the victory. However, she is up against the worst possible manifestation of cheap populism, lies, and low-class appeal - Donald Trump. Who would have figured that an '80s-style billionaire would speak to the minds of the poorest better than any socialist out there (including Bernie Sanders)? Trump's messages are almost the same as those of the Leave campaign, as is his likely voting population. Perhaps the Democrats will learn from the mistakes of the Remainers by not running a negative campaign and not trying to scare voters about what will happen if Trump wins. A positive message is necessary.

Fifth, the EU devolution, the reaction of EU policymakers, and the potential separation of Scotland and Northern Ireland from Britain (this last one I failed to touch upon in my initial comment). Despite the initial reaction of Europe's far-right and far-left parties demanding EU membership referendums in other countries, I am still confident that the EU will survive this shock, just as it survived the sovereign debt crisis, the boiling point of November 2011 (which was several times worse in its destabilizing effect than the current referendum result), not to mention several highly likely Grexit scenarios. Europe's strength has been tested on numerous occasions in the past 6-7 years, and even though this might seem like the decisive blow, I highly doubt it. Particularly given the reactions not only of Europe's leaders, but of many others across the EU (this could however be selection bias, as only the most vocal express their concerns about the EU).

This is not to forget that Europe needs a deep, deep reform of its institutions. I've personally been emphasizing this since I started the blog. The EU has completely alienated itself from the people. Even more so than any political leadership in any of its members. No one sees the EU as the convergence machine for prosperity anymore, while the benefits of the free trade area and the guarantee of peace on the continent are mostly being taken for granted. Instead the EU is, quite rightly, being portrayed as the bureaucratic leviathan with endless regulatory and legal requirements that stifle business and innovation, and that are to the common EU citizen nothing but an unnecessary burden. The EU's bureaucratic regime is turning into the worst manifestation of Kafka's and Orwell's novels. It is no wonder the people have an urge to fight against it. 

I sincerely hope the EU political elites get the message from Brexit. That could be the biggest positive to come out of it - that the EU reforms, starts re-emphasizing innovation and trade, and begins to focus on lowering within-country (as well as between-country) inequality and increasing the living standards of its people. But that's certainly not all. EU law needs to be altered, as the biggest objection against the EU is that it is run by unelected technocrats. This is true, and it needs to change. EU Parliament elections are not enough, since the Parliament has very little political clout in the EU. On the other hand, the EU budget itself has a purely developmental goal (agricultural subsidies, EU funds, education and skills, etc.) and is not really a mechanism for economic policy. So despite being overreaching and intrusive in its regulatory patterns, the EU at the same time signals vast incompetence and an inability to deal with the people's problems. Just recall its responsiveness to the sovereign debt crisis, or its reactions to the Ukraine crisis.

In order to do all this, perhaps more federalism is needed. Now that the UK, the biggest opponent of "an ever closer Union", is out of the picture, this may very well be achieved. However, the danger is again the same - will this new federalism endow the unelected bureaucrats with even more power, implying more of the same, or will it force them into promoting the true goals of Europe, as envisioned back in the 1950s? Given that the EU project is still evolving, meaning that it is still in a phase of trial and error, we can think of most efforts in the past 6-7 years as examples of error. But that doesn't mean we should give up on it quite yet.

Finally, what is to be left of the UK after this whole thing settles down? Possibly just England and Wales. The biggest problem in the whole post-referendum debris is the vast political uncertainty. Literally no one knows what comes next. Will Article 50 be invoked? When and how? What will the negotiations bring? Is the referendum outcome fully binding, and will the "Bregret" crowd succeed in overturning it? Will Scotland block the Brexit vote? Will they have their own referendum to bring them back into the EU? Will Northern Ireland do the same? These are all questions no one has an answer to yet. One thing is certain though - in the short run, Britain will suffer. Political instability always gives rise to economic instability and possibly a recession, depending on how long it takes to resolve the situation. And from the signals we're currently getting, it will take quite some time.

Friday, 24 June 2016

Brexit: The analysis of results and predictions

In yesterday’s historic referendum, Britain voted Leave. It was decided by a small margin, 51.9% to 48.1% in favour of Leave, with turnout at a high 72.2% (the highest since the 1990s). The outcome dealt a decisive blow to PM David Cameron, who announced his resignation in the morning. The markets reacted strongly, with stocks plummeting and the pound sharply declining to a 30-year low against the dollar. It was an outcome the markets failed to anticipate (or were hoping to avoid), which explains the investors’ abrupt reactions.

Read the initial reactions: The Economist is in a state of disbelief, trying to find a solution, and describing what happens next (invoking Article 50 of the Lisbon Treaty). They also have this interesting piece on the fallen legacy of David Cameron. The FT dreads "Britain's leap into the dark" and keeps warning of the negative economic consequences. Martin Wolf also had a good comment. The BBC brings reactions from abroad, discusses the possibility of another Scottish referendum, and sums up eight reasons why the Leave campaign won. Other reactions are in the same direction: "a split nation", "what will the uncertain future bring", "what have we done?", and of course - the celebrations of the Brexiters.

How did we do with our predictions?

Even though our prediction of the most likely outcome was a narrow victory for Remain (50.5 to 49.5), our model correctly anticipated that Leave had almost the same probability of winning. We gave the Leave option a 47.7% chance, admittedly more than any other model, expressing clearly that our prediction was nothing short of a coin toss.

As can be seen from our probability distribution graph below, the highest probability, for the exact result of 49.5% for Leave (the one we decided to go with), was 7.53%, while the probability of the actual outcome of 51.9% for Leave was a close 6.91%, according to the model. This is a painfully small difference that comes down to pure luck in the end. Or, as we said - a coin toss.

Source: Oraclum Intelligence Systems Ltd.
Turns out the coin fell on the other side. Nevertheless, we stayed within our margin of error and can honestly say that we came really close (off by 2.4%; see the graph below). We knew that the last few days had been hectic and that the Remain campaign was catching up (the high turnout suggests so), but it was obviously not enough to overturn the result. Leave started to lead two weeks before the referendum, and just as our model was showing an increasing chance of Leave over the weekend, a new flock of polls switched some voters’ opinions towards a likely Remain victory by Wednesday. In addition to the late trend switch in our model, we also failed to obtain a larger sample, which proved decisive in the end.

Our results in greater detail are available in the graph below. It compares our predictions to the actual results for the UK as a whole and for each region (in other words, it shows the calibration of the model). Most of our predictions fall within a 3% margin of error, and almost all of them (except Northern Ireland) within a 5% margin. The conclusion is that we have a well-calibrated model.
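The calibration check itself is simple enough to sketch in a few lines of Python. The regional figures below are hypothetical stand-ins (the real numbers are in the graph); the point is only the mechanics of counting predictions within the 3% and 5% error bands:

```python
# Calibration check: compare predicted vs. actual Leave vote shares per
# region and count predictions within the 3% and 5% error bands.
# All regional figures below are hypothetical stand-ins for the graph data.

predicted = {"London": 40.0, "Scotland": 36.0, "Wales": 51.0,
             "North East": 57.0, "Northern Ireland": 50.0}
actual    = {"London": 40.1, "Scotland": 38.0, "Wales": 52.5,
             "North East": 58.0, "Northern Ireland": 44.2}

errors  = {region: abs(predicted[region] - actual[region]) for region in predicted}
within3 = [region for region, err in errors.items() if err <= 3.0]
within5 = [region for region, err in errors.items() if err <= 5.0]

print(f"within 3%: {len(within3)} of {len(errors)} regions")
print(f"within 5%: {len(within5)} of {len(errors)} regions")
# Northern Ireland (error 5.8) falls outside both bands, mirroring the
# pattern described in the text.
```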

Model calibration (click to enlarge) Source: Oraclum Intelligence Systems Ltd.
This is even more impressive given our very small overall sample size (N=350). However, even with such a small sample we were able to come really close to the actual result, beating a significant number of other prediction models. Obviously the small sample size induced larger errors in certain regions (e.g. Northern Ireland or Yorkshire and Humberside), but it is remarkable how well the model performed with so few survey respondents - even if it did eventually predict the wrong outcome.

This was a model in its experimental phase (it still is), and the entire process is a learning curve for us. We will adapt and adjust, attempting to make our prediction method arguably the best one out there. It certainly has the potential to do that.

How did the benchmarks do?

It appears that the simplest model turned out to be the best one. The Adjusted polling average (APA), taking only the polls from the two weeks prior to the referendum, gave Leave 51% and Remain a close 48.9%. This doesn’t mean individual pollsters did well, but that pollsters as a group did well (remember, polls are not predictions; they are merely representations of preferences at a given point in time). The problem with individual pollsters was still the large uncertainty, such as double-digit shares of undecided voters even the day before the referendum. This is hardly their fault, of course, but it tells us that looking at pollsters as a group is somewhat better than looking at any single pollster, no matter when they publish their results.
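To make the "pollsters as a group" idea concrete, here is a minimal sketch: average all polls from the final two weeks and allocate the undecided voters in proportion to the decided split. The poll figures below are invented for the example, not the actual 2016 series:

```python
# "Pollsters as a group": average all polls from the final two weeks and
# allocate undecided voters in proportion to the decided split.
# The poll figures below are invented for the example, not the 2016 series.
from datetime import date

polls = [  # (date, remain %, leave %, undecided %)
    (date(2016, 6, 10), 42, 44, 14),
    (date(2016, 6, 14), 44, 45, 11),
    (date(2016, 6, 18), 45, 43, 12),
    (date(2016, 6, 21), 44, 45, 11),
]
cutoff = date(2016, 6, 9)  # two weeks before the 23 June referendum
recent = [p for p in polls if p[0] >= cutoff]

remain = sum(p[1] for p in recent) / len(recent)
leave  = sum(p[2] for p in recent) / len(recent)

# split the undecideds in proportion to the decided vote
remain_final = 100 * remain / (remain + leave)
leave_final  = 100 * leave  / (remain + leave)
print(f"Remain {remain_final:.1f}% vs Leave {leave_final:.1f}%")  # Remain 49.7% vs Leave 50.3%
```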

However, the Poll of polls (taking only the last six) was off, as it was 52:48 in favour of Remain (they updated it yesterday just after I published the post, so I didn’t have time to change it). And the expert forecasting models from Number Cruncher Politics and Elections Etc were off by 4% and 5%, respectively.

Most surprisingly, the prediction markets and the betting markets all failed significantly! As did the superforecasters. It turns out that putting your money where your mouth is is still not enough for a good prediction. At least not when it comes to Britain. Prediction markets were in some cases giving Remain an over 80% chance on the day of the referendum. Ours was the only model predicting a much more uncertain outcome.

Thursday, 23 June 2016

Brexit: the final prediction!

Our final prediction is a close victory for Remain. According to our BAFS method, Remain is expected to receive a vote share of 50.5%, giving it a 52.3% chance of winning.

Click to enlarge. Source: Oraclum Intelligence Systems Ltd. 

Our prediction produces a probability distribution shown on the graph above (see explanation to the right), presenting a range of likely scenarios for the given vote shares. Over the past week we have consistently been providing estimates of the final vote share and the likelihood of each outcome. Daily changes and close results simply reflect the high levels of uncertainty and ambiguity surrounding the EU referendum. However, our prediction survey (the BAFS) has noticed a slight trend change in favour of Remain in the past two days.

This is why our final prediction gives a slight edge to Remain, with a predicted vote share of 50.5% for Remain and 49.5% for Leave (the graph below represents the vote share of Leave, denoted as ‘votes for Brexit’ – a higher expected vote share for Brexit decreases the probability of Remain as the final outcome). The probabilities for both outcomes are also quite close, standing at 52.3% for Remain and 47.7% for Leave. This means that 52% of the time, when polling is this close and when the people themselves expect and predict a very close result, Remain would win; 48% of the time it wouldn’t.
Click to enlarge. Source: Oraclum Intelligence Systems Ltd. 
Vote share for Leave (votes for Brexit). The grey area describes the average error. As the sample size grew, the average error decreased.
Click to enlarge. Source: Oraclum Intelligence Systems Ltd. 
A timeline of probabilities for both outcomes since the start of our survey.

Why such low probabilities?

Due to a relatively high margin of error (±5.3%). However, given that this is not a standard survey with a representative sample, the error term does not mean much in this case (there is a whole debate about the controversy behind the margin of error - read it here).

Nevertheless, why is the error so high? Primarily because of the very high levels of uncertainty among the actual polls, as well as among the predictions our respondents gave us. Our sample size was also relatively small (more on that below). If the error were around 1%, the probabilities would have been much higher in favour of Remain (above 70%), which is closer to what the prediction markets and the superforecasters are saying.
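As a back-of-envelope illustration of how the error drives the win probability, one can assume a normal distribution around the predicted Remain share and treat the margin of error as a 95% half-width. The actual model's distribution need not be normal, so these figures only approximate the probabilities quoted here:

```python
# Back-of-envelope: how the margin of error drives the win probability,
# assuming a normal distribution around the predicted Remain share.
# The actual model's distribution need not be normal, so these figures
# only approximate the probabilities quoted in the text.
from statistics import NormalDist

def remain_win_prob(remain_share, margin_of_error):
    sigma = margin_of_error / 1.96  # treat the MoE as a 95% half-width
    # Remain wins whenever its realized vote share exceeds 50%
    return 1 - NormalDist(mu=remain_share, sigma=sigma).cdf(50.0)

print(remain_win_prob(50.5, 5.3))  # roughly 0.57 with a +-5.3% error
print(remain_win_prob(50.5, 1.0))  # roughly 0.84 if the error were ~1%
```

Note how shrinking the error from 5.3% to 1% pushes a 50.5% vote-share lead from a near coin toss to a fairly decisive probability.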

But doesn't this mean the prediction is as good as a coin toss?

Indeed. As it stands, the race is nothing short of a coin toss.

The problem in predicting such close outcomes is the measure of relative success of the prediction method. Usually, being correct within a 3% margin is considered quite precise. In this case nothing short of a 1% margin will suffice, which is essentially ridiculous and extremely difficult to achieve.

Having said that, we do hope our prediction method will be correct within its margin of error, but more importantly that it has correctly predicted the final outcome.

How does the method work?
The BAFS (Bayesian Adjusted Facebook Survey) method is a prediction method based on our own unique poll, in which we ask people not only to express their preferences, but also who they think will win, and how they feel about who other people think will win. This makes it different from regular polls, which are simply an expression of voter preferences at a given point in time.

The obvious difference between standard polling and our method was noticeable in our initial predictions, where we had a very small sample (around 100 respondents) that was obviously biased towards one option (it gave Remain a 66% vote share), but we were still able to produce very reliable and realistic forecasts (see the graph below; the first results pointed to a slight victory for Remain, even with very high margins of error - initially over 10%). The later variations in our predictions were small, even as the sample size increased threefold.

We follow the logic of Murr’s (2011, 2015, 2016) citizen forecaster models, where even a small sample within each constituency (21 respondents per constituency on average for group forecasts) is enough to provide viable estimates of the final outcome across constituencies.

The BAFS method, similar to the citizen forecaster model, is therefore relatively robust to sample size, as well as to the self-selection problem (all of our respondents voluntarily participated in the survey). Both of these issues undermine the quality of standard polling, but in our case they were shown to have little or no effect. The BAFS method, utilizing the wisdom-of-crowds approach (group-level forecasting), benefited from a diverse, decentralized, and independent group of respondents (see Surowiecki, 2004), which gave us very realistic estimates of the final outcome. This implies that our prediction is likely to be quite close to the actual outcome on 23rd June.
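The citizen-forecaster logic can be illustrated with a toy sketch: instead of counting respondents' own preferences, count who they *expect* to win, region by region. The respondent data below are hypothetical, and the real BAFS additionally folds in what respondents think others expect, which this sketch omits:

```python
# Toy illustration of the citizen-forecaster logic (Murr): aggregate who
# respondents *expect* to win rather than their own preferences, region
# by region. All respondent data here are hypothetical; the real BAFS
# additionally weighs in what respondents think others expect.
from collections import Counter, defaultdict

responses = [  # (region, own vote, expected winner)
    ("Scotland", "Leave",  "Remain"),
    ("Scotland", "Remain", "Remain"),
    ("Scotland", "Remain", "Remain"),
    ("Wales",    "Leave",  "Leave"),
    ("Wales",    "Remain", "Leave"),
    ("Wales",    "Leave",  "Leave"),
]

by_region = defaultdict(Counter)
for region, _own_vote, expected in responses:
    by_region[region][expected] += 1

# each region's forecast is the most commonly expected winner there
forecast = {region: counts.most_common(1)[0][0]
            for region, counts in by_region.items()}
print(forecast)  # {'Scotland': 'Remain', 'Wales': 'Leave'}
```

Note that in Wales the Remain-voting respondent still expects Leave to win; expectations pool information that raw preferences miss, which is why small samples can remain informative.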

How do we compare to other methods?

As we announced last month, in addition to our central prediction method we will use a series of benchmarks for comparison with our BAFS method. In the following tables we have summarized the relevant methods. For more about each method, please read here. (Note: We have decided to introduce two new methods, from Number Cruncher Politics and from Elections Etc., both of which have proven track records in previous elections.)

* For the adjusted polling average, the regular polling average, and for the forecasting polls we have factored in the undecided voters as well.

As it stands, we tend to be quite close to the other predictions on vote share (polls are slightly in favour of Leave, while other prediction methods are slightly in favour of Remain), but a bit further from their probability estimates (for the reasons described above - if our error were lower, our probabilities would also have been around 70:30 in favour of Remain).

Mapping the results

Finally, here is how the map of the UK should look if our predictions are correct:

Click to enlarge. Source: Oraclum Intelligence Systems Ltd. 
And here is the table by regions:

Source: Oraclum Intelligence Systems Ltd. 
Copyright for all visuals: Oraclum Intelligence Systems Ltd. 

Wednesday, 15 June 2016

The Bayesian Adjusted Facebook Survey has started!

Today we launched our Brexit survey, and I invite all of my UK readers to give it a go. You will be helping us test our new BAFS prediction method - in other words, helping us make a better prediction for the upcoming UK EU referendum. As you may or may not know, the polls show a split country. At this point, a week before the referendum, the uncertainty regarding the potential outcome is sky-high. With our survey, which will be running until the final day before the referendum, we hope to reduce some of this uncertainty by utilizing our unique BAFS prediction method to forecast the exact percentage each of the two options will get.

The trick with our survey, as opposed to all others, is that we make our prediction by asking people not only who they will vote for, but also who they think will win, and how they think others will estimate who will win. For further clarification, read more here.

This basically means that we are not worried about the non-representativeness of our sample, nor about the self-selection problem the survey is facing. Neither of these will bias the prediction. We hope to have our first results within a day or two, and we will keep updating them every day until the day before the referendum.

Also, after you vote, you can see how your friends voted (on aggregate, not individually), and how popular/influential you are within your network - but only if you share the survey directly through the app. So don't forget to share, either on Facebook or Twitter. Here's the link to the survey itself.

Monday, 13 June 2016

Brexit: Ranking UK pollsters

Opinion pollsters in the UK came under fierce attack from the public and the media following their joint failure to accurately predict the results of the 2015 UK general election. In the months and weeks before the May election, the polls were predicting a hung parliament and a virtual tie between the Conservatives and Labour, where the outcome would have been another coalition government (a number of combinations were discussed, even a grand coalition between Labour and the Conservatives), or even a minority government.

The results showed that the pollsters, on average, missed the difference between the two parties by 6.8%, which translated into about 100 seats. What was supposed to be one of the closest elections in British history turned out to be a landslide victory for the Conservatives. Naturally, inquiries were made, accusing pollsters of complacency, herding, and deliberate manipulation of their samples. And while something certainly went wrong in the pollsters' sampling methods, we will not go into too much detail as to what that was.

In fact we wish to vindicate some pollsters by offering, for the first time in the UK, an unbiased ranking of UK pollsters.

And here it is:

[Table: ranking of UK pollsters. Columns: number of polls analyzed; joint within-between index for 2015, 2014, and 2010; final weighting index (used for the ranking); precision index. The top-ranked pollsters include Ipsos MORI, Angus Reid, Harris Interactive, and Lord Ashcroft.]
Source of data: UK Polling Report. All calculations (and mistakes) are our own. 
*Note: SurveyMonkey was included despite having done only one poll due to their sample size of 18,000 respondents and since they were the only ones to have perfectly predicted the result of the 2015 general election.

Our rankings are based on a somewhat technical but still easy to understand methodological approach summarized in detail in the text below. It has its drawbacks, which is why we welcome all comments, suggestions and criticism. We will periodically update our ranking, our method, and hopefully include even more data (local and national), all with the goal of producing a standardized, unbiased overview into the performance of opinion pollsters in the UK.

We hope that our rankings stir a positive discussion on the quality of opinion pollsters in the UK, and we welcome and encourage the use of our rankings data by other scientists, journalists, forecasters, and forecasting enthusiasts.

Note also that in the ranking list we omit the British Election Study (BES), which uses a far better methodology than other pollsters – a face-to-face random sample survey (the gist of it is that they randomly select eligible voters to get a representative sample of the UK population, and then repeatedly contact those people to do the survey; you can read more about it here). This enabled them to produce one of the most precise predictions of the 2015 general election (they gave the Conservatives an 8% margin of victory). However, there is a problem – the survey was (and usually is) done after the election, meaning that it cannot be used as a prediction tool. Because of this, instead of grouping it with the others, we use the BES only as a post-election benchmark.

Methodology and motivation

Our main forecasting method during the Brexit referendum campaign, the Bayesian Adjusted Facebook Survey (BAFS), will additionally be tested against a series of benchmarks. The most precise of these is the Adjusted polling average (APA) method; in fact, the main motivation for producing our own ranking of UK pollsters was to complement this particular method[1]. As emphasized in a previous post, our APA benchmark adjusts all current Brexit referendum polls not only for timing and sample size, but also for each pollster's relative performance and past accuracy. For every poll we formulate a joint weight from its timing (the more recent the poll, the greater the weight), its sample size (the greater the sample, the greater the weight), whether it was done online or via telephone, and the ranking of its pollster. This allows us to calculate the final weighted average across all polls[2] in a given time frame (in this case, since the beginning of 2016).
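As a rough illustration, the weighting scheme just described can be sketched as follows. The poll figures, the decay half-life, and the ranking weights are invented for the example (and the online/telephone adjustment is omitted for brevity); the actual parameters are those described in the footnotes.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    remain: float          # % for Remain
    leave: float           # % for Leave
    days_old: float        # days since the poll closed
    sample_size: int       # number of respondents
    ranking_weight: float  # pollster's final weighting index

def timing_weight(days_old: float, half_life: float = 14.0) -> float:
    """Exponential decay from 4 towards 0 (cf. footnote 2); half-life is assumed."""
    return 4.0 * 0.5 ** (days_old / half_life)

def adjusted_polling_average(polls: list[Poll]) -> tuple[float, float]:
    """Weighted average of Remain/Leave shares across all polls."""
    num_remain = num_leave = denom = 0.0
    for p in polls:
        # joint weight: timing + sample size (N/1000) + pollster ranking
        w = timing_weight(p.days_old) + p.sample_size / 1000 + p.ranking_weight
        num_remain += p.remain * w
        num_leave += p.leave * w
        denom += w
    return num_remain / denom, num_leave / denom

polls = [Poll(44, 42, 2, 2000, 7.5), Poll(41, 43, 20, 1000, 6.0)]
remain, leave = adjusted_polling_average(polls)
```

Note how the recent, large poll with the well-ranked pollster dominates the average, which is exactly the behaviour the joint weight is designed to produce.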

The weighted average calculation gives us the percentages for Remain (currently around 43%), Leave (currently around 41%), and undecided (around 15%). To get the final numbers which we report in our APA benchmark, we factor in the undecided votes as well.

How do we produce our rankings?

The rankings are based on the past performance of pollsters in three earlier elections: the 2015 general election, the 2014 Scottish independence referendum, and the 2010 general election. In total we observed 480 polls from 15 pollsters (not all of which participated in all three elections). We realize the sample could have been bigger had we included local and earlier general elections; however, many pollsters from 10 years ago no longer produce polls (while the majority of those operating in 2015 are still producing them for the Brexit referendum), and local elections are quite specific, so we focus only on these three national elections. We admit that the sample should be bigger and will consider including local polling outcomes, adjusted for their type. There is also the issue of each pollster's methodological standards, which we do not take into account, as we are only interested in each pollster's relative performance in the previous elections.

Given that almost all pollsters failed to predict the outcome of the 2015 general election, we also look at performance between pollsters, in order to avoid penalizing them too much for this failure. If no one saw it coming, they are all equally excused, to a certain extent. If, however, a few did predict it correctly, the penalization of all the others is more significant. We therefore jointly adjust the within accuracy (the accuracy of an individual pollster with respect to the final outcome) and the between accuracy (the accuracy of an individual pollster with respect to the accuracy of the group).

1. Within accuracy

To calculate the precision of pollsters in earlier elections we again assign weights for timing and sample size, in the same way as described earlier (older polls matter less, larger samples matter more). These two factors are summed into the total weight for a given poll. We then take each individual pollster and calculate its weighted average (as before, the sum of the products of all its polls and their sample and timing weights, divided by the sum of all weights – see footnote 2). This lets us calculate the average error each pollster made in a given election; we do this for all three elections in our sample, yielding each pollster's within accuracy for each election. The average error for an individual pollster is the simple deviation between the weighted average polling result and the actual result for the margin between the first two parties in the election (e.g. Conservatives and Labour)[3]. Or, in plain English, how well they guessed the difference between the winner and the runner-up.
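A minimal sketch of this step, with made-up margins and weights (the actual 2015 Conservative–Labour margin was roughly 6.5 points):

```python
def weighted_margin(margins, weights):
    """Weighted average of a pollster's predicted winner-runner-up margins."""
    return sum(m * w for m, w in zip(margins, weights)) / sum(weights)

def within_error(margins, weights, actual_margin):
    """z_i = |weighted predicted margin - actual margin| (cf. footnote 3)."""
    return abs(weighted_margin(margins, weights) - actual_margin)

margins = [1.0, 2.0, -1.0]  # one pollster's predicted margins, in points
weights = [3.0, 2.5, 1.5]   # combined timing + sample-size weight per poll
z = within_error(margins, weights, 6.5)  # actual margin, in points
within_index = 10 - z                    # I_w = 10 - z_i (footnote 3)
```

A pollster that badly underestimated the margin, as in this invented example, ends up with a large error z and a correspondingly low within index.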

2. Between accuracy

After determining our within index, we estimate the accuracy between pollsters (by how much they beat each other) and sum the two into a single accuracy index. To do this we first calculate the average error of all pollsters during a single election. We then simply subtract each individual error from this joint error. The result is our between index: the greater the value, the better the pollster did against all others (note: the value can be negative).
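With invented error numbers, the step looks like this (positive values mean a pollster beat the field, matching the "greater the value, the better" reading above):

```python
def between_index(within_errors: dict) -> dict:
    """Joint (average) error minus each pollster's own within error."""
    joint = sum(within_errors.values()) / len(within_errors)
    return {name: joint - err for name, err in within_errors.items()}

# hypothetical within errors z_i for three pollsters in one election
errors = {"A": 4.0, "B": 6.0, "C": 5.0}
between = between_index(errors)  # A beat the field, B lagged it, C matched it
```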

3. Joint within-between ranking

To get our joint within-between index we simply sum up the two, thereby lowering the penalization across all pollsters if and when all of them missed. In this case those who missed less than others get a higher value improving their overall performance and ranking them higher on the scale.

We repeat the same procedure across all three elections and produce two final measures of accuracy. The first is the final weighting index (which we use for the ranking itself and whose values feed into the Brexit polls), and the second is the precision index. The difference is that the precision index does not factor in the number of elections, whereas the final index does. The precision index is the simple average of a pollster's within-between indices across the elections it participated in, while the final index is their sum divided by the total number of elections we observed (three), regardless of how many of them the pollster participated in. The two are the same for a pollster that participated in all three elections, but differ for one that participated in fewer.
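The distinction between the two indices can be sketched with invented numbers: a pollster active in all three elections scores identically on both, while a single-election pollster is diluted on the final weighting index.

```python
def precision_index(indices):
    """Simple average over the elections a pollster actually entered."""
    return sum(indices) / len(indices)

def final_weighting_index(indices, elections_observed=3):
    """Sum divided by all observed elections, entered or not."""
    return sum(indices) / elections_observed

veteran = [7.0, 8.0, 6.0]  # joint within-between indices, all three elections
one_off = [9.0]            # a single-election pollster (e.g. 2015 only)
```

Here the one-off pollster tops the precision index (9.0 vs 7.0) but drops well below the veteran on the final weighting index (3.0 vs 7.0), which is exactly the consistency penalty described above.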

For example, consider the fourth-ranked SurveyMonkey. They have the highest precision grade because they were the only ones to predict the 2015 result almost perfectly (a 6% margin Conservative victory). However, since they only participated in a single election, they do not come out on top in the final weighting index. Pollsters that operated across all three elections give us the possibility of measuring their consistency, a luxury we do not have for single-election players.

In other words, perhaps SurveyMonkey was just lucky, particularly since they conducted only a single survey prior to that election. However, given that the survey was done in the week before election day (from April 30th to May 6th; election day was May 7th) and that it had over 18,000 respondents, our guess is that it was not all down to luck. Either way, given that their entry into the race was late and a one-off shot (similar to our BAFS effort, actually), if or when they do produce an estimate for Brexit one day prior to the referendum, we will surely factor them in and give them a high weight. Not as high as their precision index suggests, but high enough. The same holds for several other pollsters that operated over the course of a single election, meaning that they get a lower weight overall, regardless of their single-election accuracy.

To conclude, the numbers reported in the final weighting index column represent the ranking weight we talked about at the beginning of this text. Combined with the timing and sample size weights, it helps us calculate the final weighted average of all polls, thereby helping us construct our strongest benchmark, the adjusted polling average.

[1] The rankings that we report here will not be a part of our BAFS method.
[2] Calculated as Σ(x_i·w_i) / Σw_i, where x_i is an individual poll and w_i the corresponding weight. w_i is calculated as the sum of three weights: timing (using an adjusted exponential decay formula, decreasing from 4 to 0, with half-life defined by t_1/2 = τ ln(2)), sample size (N/1000), and the ranking weight (described in the text).
[3] Define x_i as the difference between the predicted vote shares of party A (v_A) and party B (v_B) for pollster i, and y as the difference between the actual vote shares of the two parties. Assume A was the winner and B the runner-up. The within accuracy of pollster i (z_i) is then defined simply as z_i = |x_i – y|. The closer z_i is to 0, the more accurate the pollster. From this we calculate the within index as I_w = 10 – z_i.