Monday, 30 May 2016

Brexit referendum: method and benchmarks

Our prediction method (announced in the previous text) rests primarily upon our Facebook survey, where we use a variety of Bayesian updating methodologies to filter out internal biases and offer the most precise prediction possible. In essence, we ask our participants three things: how they will vote (e.g. Leave or Remain in the Brexit referendum), what percentage they think their preferred choice will get, and what percentage they think other people will estimate Leave or Remain could get. Depending on how well they estimate the prospects of their preferred choice, we gauge their predictive power and give a higher weight to the better predictors. We call this method the Bayesian Adjusted Facebook Survey (BAFS).
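To make the weighting idea concrete, here is a minimal sketch (in Python) of the general logic: respondents whose forecasts land closer to a reference figure get a larger say in the weighted vote share. The inverse-error rule and all the numbers are illustrative assumptions for this post, not the actual BAFS procedure.

```python
import numpy as np

# Hypothetical respondents: their own vote, their forecast of the national
# Remain share, and a reference figure to score that forecast against
# (e.g. a current polling average). All numbers are illustrative.
votes     = np.array(["Remain", "Leave", "Remain", "Leave", "Remain"])
forecasts = np.array([54.0, 46.0, 51.0, 40.0, 58.0])   # each respondent's predicted Remain %
reference = 50.5                                        # benchmark Remain % (assumed)

# Better predictors get a higher weight; here a simple inverse-error rule.
errors  = np.abs(forecasts - reference)
weights = 1.0 / (1.0 + errors)

# Weighted vote share: each ballot counts in proportion to the accuracy
# of its owner's forecast.
remain_share = 100 * weights[votes == "Remain"].sum() / weights.sum()
print(f"Weighted Remain share: {remain_share:.1f}%")
```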

In our first election prediction attempt, where we predicted the results of the 2015 general elections in Croatia, we found that our adjusted Facebook poll (AFP)[1] beat all other methods (ours and other pollsters') by a significant margin. Not only did it correctly predict the rise of a complete outlier in those elections, it also gave the closest estimates of the number of seats each party got. Our standard method, combining bias-adjusted polls and socio-economic data, projected a 9-seat difference between the two main competitors (67 to 58; in reality the result was 59 to 56), and a rather modest result for the outlier party, which was projected to finish third with 8 seats – it got 19 instead. Had we used the AFP, we would have given 16 seats to the third party and predicted a much closer race between the first two (60 to 57). The remarkable success of the method, particularly given that it operated in a multiparty environment with roughly 10 parties with realistic chances of entering parliament (6 of which competed for third-party status, all founded within a year of the elections, with no prior electoral data), encouraged us to improve it further, which is why we tweaked it into the BAFS.

In addition to our central prediction method we will use a series of benchmarks for comparison with our BAFS method. We hypothesize (quite modestly) that the BAFS will beat them all.

We will use the following benchmarks:

Adjusted polling average – we examine all the relevant pollsters in the UK based on their past performance in predicting the outcomes of the past two general elections (2015 and 2010) and one recent referendum – the 2014 Scottish independence referendum. We could go further back in time and take the polls of local elections into consideration as well. However, we believe the more recent elections adequately capture the shift in polling methods, along with their contemporary downsides. As far as local elections are concerned, we fear they tend to be too specific: predicting local outcomes and national outcomes are two different things, and more precision at the local level need not translate into more precision at the national level. Given that the vote of our concern is national (the EU referendum), it makes sense to focus only on the past performance of national-level polls. We are, however, open to discussion regarding this assumption.

In total we covered 516 polls from more than 20 different pollsters in the UK across the selected elections. Each pollster has been ranked according to its precision. The precision ranking is determined on a time scale (predictions closer to the election carry a greater weight) and a simple Brier score is calculated to determine the forecasting accuracy of each pollster. Based on this ranking, weights are assigned to all the pollsters. To calculate the final adjusted polling average we take all available national polls, adjust them according to timing, sample size, whether the poll was conducted online or by telephone, and the pollster's pre-determined ranking weight, and take the weighted average across all polls. We also calculate the probability distribution for our final adjusted polling average.
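For illustration, here is a hedged sketch of the logic described above: toy Brier scores turn into pollster weights, and a handful of made-up polls are then combined into one adjusted average. The half-life, sample-size rule, mode weights, and the "1 minus Brier" rule are placeholder assumptions; the real weights come out of the ranking exercise over the 516 historical polls.

```python
import numpy as np

# --- Step 1: rank pollsters by historical accuracy (toy Brier scores) ---
# Brier score = mean squared error of a probabilistic forecast against the
# outcome (1 if the event happened, 0 if not); lower is better. Illustrative data.
history = {
    "Pollster A": ([0.60, 0.55, 0.70], [1, 1, 1]),   # (forecast probs, outcomes)
    "Pollster B": ([0.45, 0.65, 0.40], [1, 1, 0]),
}
brier = {name: np.mean((np.array(p) - np.array(o)) ** 2) for name, (p, o) in history.items()}
pollster_w = {name: 1.0 - b for name, b in brier.items()}   # assumed rule: lower Brier, higher weight

# --- Step 2: combine current polls into one adjusted average ---
# Each poll: (pollster, Remain %, days before the vote, sample size, mode weight)
polls = [
    ("Pollster A", 51.0,  5, 2000, 1.00),
    ("Pollster B", 49.0, 12, 1000, 0.95),
]
HALF_LIFE = 14.0   # days; assumed decay rate for older polls

def weight(name, days_out, sample, mode_w):
    recency = 0.5 ** (days_out / HALF_LIFE)    # more recent polls count more
    size    = (sample / 2000.0) ** 0.5         # diminishing returns to sample size
    return recency * size * mode_w * pollster_w[name]

w = np.array([weight(n, d, s, m) for n, _, d, s, m in polls])
remain = np.array([r for _, r, *_ in polls])
print(f"Adjusted polling average (Remain): {np.average(remain, weights=w):.2f}%")
```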

Regular polling average – this will be the same as above, except it won't be adjusted for any prior bias of a given pollster, nor for sample size. It is only adjusted for timing (more recent polls get a greater weight). We look at all the polls conducted up to two months before the most recent poll.

What UK Thinks Poll of Polls – this averages only the six most recent polls, and is produced by What UK Thinks, a non-partisan website run by the NatCen Social Research agency. The set of polls that goes in changes each week as pollsters publish new data. The method is simple averaging (a moving average) with no weighting.

Forecasting polls – these are polls that ask people to estimate how much one choice will get over the other. They differ from regular polls in that they don't ask who you will vote for, but who you think the rest of the country will vote for. The information for this estimate is also gathered via the What UK Thinks website (see sample questions here, here, here, and here).

Prediction markets – we use a total of seven betting markets: PredictIt, PredictWise, Betfair, Pivit, Hypermind, IG, and iPredict. Their estimates are also distributed on a time scale, with recent predictions given a greater weight. Each market is also weighted by its volume of trading, so that we can calculate and compare a single prediction across all the markets (as we do with the polls). Unlike the regular polls, prediction markets don't produce estimates of the total percentage one option will get over another; they offer probabilities that a given outcome will occur, so the comparison with the BAFS will be done purely on the basis of the probability distributions of an outcome.
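A hedged sketch of how such a pooled market number could be computed: each market's implied probability of Remain is weighted by its trading volume and discounted by the age of the quote. The probabilities, volumes, and decay constant are made up for illustration.

```python
import numpy as np

# Hypothetical markets: (implied P(Remain), trading volume, days since quote)
markets = [
    (0.76, 500_000, 1),
    (0.72, 120_000, 3),
    (0.78,  60_000, 7),
]

DECAY = 0.9  # assumed per-day down-weighting of older quotes

weights = np.array([volume * DECAY ** age for _, volume, age in markets])
probs   = np.array([p for p, *_ in markets])
print(f"Pooled market probability of Remain: {np.average(probs, weights=weights):.1%}")
```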

Prediction models – if any. The idea is to examine the results of prediction models such as the ones produced by Nate Silver and FiveThirtyEight. However, so far FiveThirtyEight hasn't done any predictions on the UK Brexit referendum (I guess they are preoccupied with the US primaries, and are probably staying away from the UK for now given their poor result at the 2015 general election). One example of such a model based purely on socio-economic data (without taking any polling data into consideration, so quite different from Silver) is the one done by UK political science professor Matt Qvortrup, who wraps it all up into a simple equation: Support for EU = 54.4 + Word-Dummy × 11.5 + Inflation × 2 – Years in Office × 1.2.[2] Accordingly, his prediction is 53.9% for the UK to Remain. We will try to find more such efforts to compare our method with.

Superforecaster WoC – we utilize the wisdom of the superforecaster crowd. "Superforecasters" is a colloquial term for participants in Philip Tetlock's Good Judgment Project (GJP) (there's even a book about them). The GJP was part of a wider forecasting tournament organized by the US government agency IARPA following the intelligence community's fiasco regarding WMDs in Iraq. The government wanted to find out whether there exists a more reliable way of making predictions. The GJP crowd (all volunteers, regular people, seldom experts) significantly outperformed everyone else several years in a row – hence the title superforecasters (there are a number of other interesting facts about them – read more here, or buy the book). However, superforecasters are only a subset of the more than 5,000 forecasters who participate in the GJP. Given that we cannot really calculate and average out the performance of the top predictors within that crowd, we have to take the collective consensus forecast. Finally, similar to the betting markets, the GJP doesn't ask its participants to predict the actual voting percentage, only to gauge the probability of an event occurring. We therefore only compare the probability numbers for this benchmark.

Finally, we will calculate the mean of all the given benchmarks. That will be the final robustness test of the BAFS method.

So far, one month before the referendum, here is the rundown of the benchmark methods (these will be updated over time):

Method                       Remain    Leave
Adjusted polling average*     50.5     47.16
Regular polling average*      51.04    46.99
Poll of polls                 54       46
Prediction models             53.9     46.1
Mean                          52.36    46.56

Note: updated as of 23rd May 2016.

The following table expresses it in terms of probabilities:

Method                       Remain    Leave
Adjusted polling average      66.89    33.11
Regular polling average       67.13    32.87
Forecasting polls*            62.98    31.05
Prediction markets            74.85    25.15
Superforecaster WoC           77       23
Mean                          69.77    29.04

Note: updated as of 23rd May 2016.

* For the adjusted polling average, the regular polling average, and the forecasting polls we have factored in the undecided voters as well.




[1] Note: this is not the same method as we use now, even though it was quite similar. 
[2] See his paper(s) for further clarification. 

Wednesday, 25 May 2016

Predicting the Brexit referendum

I'm proud to announce that last month I became an entrepreneur! Together with two of my friends and colleagues, physicist Dejan Vinkovic and computer scientist Mile Sikic, I started a company called Oraclum Intelligence Systems Ltd., based in Cambridge, UK. The name of the game? Electoral forecasting. But also data visualization, big data analysis, and market research.

Essentially, we are a start-up in its R&D phase, and are at this point mostly concerned with experimental testing of our prediction models on real-life data. And what better way to test a prediction method than elections!

I have already written about our earlier efforts in predicting the Croatian 2015 general election, but now our focus is on the international stage, where the first big vote coming up is the UK EU membership referendum (the popular 'Brexit'). After that we will turn our efforts and attention to the US 2016 Presidential election, which is almost certainly going to be a duel between Clinton and Trump, as I predicted back in January (I also gave Hillary a slight advantage over Trump at the time; we'll see how that goes).

Anyway, over the next month my blogs will be mostly focused on Brexit and electoral predictions of the referendum. These will include a series of introductory texts on our methodology, the benchmarks we will be using for comparison (which will be regularly updated), the ranking of pollsters based on their historical performance, etc. And of course we will provide day-to-day coverage of our results in the two weeks prior to the referendum (which is held on June 23rd).

All these texts will be available on our official Oraclum Blog, plus I will create a separate page on this blog to keep track of them (similar to the page I have on the Eurozone crisis).

How Britain got to the Brexit referendum

A brief introduction to the political landscape so far. Back in January 2013, in response to mounting pressure from his own party and the upsurge in popularity of the eurosceptic UK Independence Party (UKIP), British Prime Minister David Cameron promised his voters that if they re-elected his Conservative government he would give citizens a chance to vote in an in-out EU referendum for the first time since 1975: "It is time for the British people to have their say. It is time to settle this European question in British politics." The date was set for 2017 at the latest.

As a prelude to the 2015 general election campaign, he emphasized on numerous occasions his willingness to keep his referendum pledge, even announcing that the referendum might take place earlier than initially conceived.

The campaign strategy worked. The Conservatives swept to a landslide electoral victory, to the complete surprise of almost all UK pundits and almost every pollster. While everyone was predicting a very close election and a virtual tie between Labour and the Conservatives, the Conservatives picked up almost 100 seats more than Labour, which was enough for them to form a single-party government. Reinvigorated by this success, the party quickly moved forward with the EU Referendum Act. It was introduced to the House of Commons as early as May 2015 (a few weeks after the elections) and approved in December, with the official date (23rd June 2016) announced in February. The referendum question itself was designed to be quite clear, leaving no room for ambiguity:


“Should the United Kingdom remain a member of the European Union or leave the European Union? 

  • Remain a member of the European Union 
  • Leave the European Union”
It is hard to say whether the pledge of an in-out EU referendum helped the Conservative party (there were certainly other things that led them to such an impressive and unexpected result, primarily the dismal performance of Labour and the Liberal Democrats across the country), but it was a gamble the PM was willing to take. He kept his promise, even allowing individual party members to form their own opinions on Brexit, not necessarily along official party lines.

Cameron’s plan was to renegotiate Britain’s deal with the EU, primarily concerning immigration and welfare policies. In February he did just that, although many would disagree on the extent of his success, calling it a lukewarm deal at most, falling short of many of his promises. The deal is set to grant Britain a “special status” within the EU if it votes Remain. It ensures that Britain will not be part of the path towards “an ever closer union”, that the financial sector is protected from further EU regulations, and that Britain is exempt from further bailouts of troubled eurozone nations (and is even to be reimbursed for funds used so far). Where it fell short of expectations was on migrant welfare payments and child benefits. Migrant workers will still be able to send child benefits back to their home countries, while new arrivals will gradually be able to claim more benefits the longer they stay. Some compromises were made on both sides of the bargaining table; however, this hardly satisfied the eurosceptics back home.

Today, the Conservatives are divided. The party leadership, as well as the majority of government ministers, supports the Remain campaign. However, roughly half of Conservative MPs, 5 government ministers, and the former Mayor of London and prominent Conservative figure Boris Johnson all support the Leave campaign. Conservative voters accurately reflect their party’s split – YouGov reports a 44%-56% division in favour of Leave.

On the other hand, the Labour party’s new leadership under Jeremy Corbyn has expressed its official position in support of the Remain campaign, although political pundits have noted a slight reservation on Corbyn’s part towards the EU (primarily based on his previous opinions of it). Labour voters, however, are much more inclined towards the EU than their current party leader: 75% of them support Remain, while only 25% support Leave. LibDem voters are even more pro-EU (79-21), while at the other end of the spectrum, UKIP voters are perfectly aligned with their party’s position (97% support Leave).

Usually, when a country’s political party leaderships announce their positions on a referendum (particularly on EU membership), the outcome is very often predictable – voters listen to their parties and vote accordingly. The same can actually be said of the current division regarding Brexit – voters do listen to their parties. The Conservative party is split (its official position is neutral) and its voters act accordingly. UKIP, the LibDems, and Labour are all relatively united on the referendum question, so their voters also vote accordingly. It is this interesting dynamic operating within the Conservative party and the electorate in general that makes this referendum a difficult one to predict. After all, the majority of the polls are predicting a very close result, within the margin of error.

The role of Oraclum

What is our stake in this referendum? We are, above all, a non-partisan venture in its start-up R&D phase, interested in experimental testing of our models on real-life electoral data. We aim to use a Facebook survey of UK voters (more on that in the next text), along with our unique set of Bayesian forecasting methods, to try to pick out the best and most precise prediction method. Essentially our motivation in this initial stage is purely scientific. We wish to uncover a successful prediction method using the power of social networks. After the Brexit referendum, we will apply the same methods to the forthcoming US Presidential elections in November 2016.

In our Facebook survey we will not use any user data from Facebook directly or indirectly, only the data the users provide us in the survey itself. We will have no knowledge of voter preferences of any individual user, nor will we be able to find that out. 

The Facebook survey will be kick-started 10 days prior to the referendum, on 13th June, and will run up until the very last day, when we will provide our final forecast. Our forecasts will show both the total predicted percentages and the probability distributions for both options. They will also show the distribution of preferences among the friends of each user (so that users can see how their social network is behaving and who they, as a group, are voting for), as well as the aggregate predictions the survey respondents will be giving.

Furthermore, we will present our predictions in a map format, based on UK regions, where we will show the actual polling numbers and our Bayesian adjusted version.

We welcome all suggestions, comments, and criticism. 

In the next blog post, I will introduce you briefly to our method and the number of benchmarking methods which we will use for comparison. 

Sunday, 8 May 2016

Graph of the week: Race and money affect school performance

The New York Times brings the following interactive graphic (I encourage you to click on the link and try it out; you can track direct comparisons in performance by race and wealth - it's striking):

Source: NYT
There appears to be a large positive correlation between race, money, and school performance. Kids coming from rich, white districts significantly outperform kids coming from poorer and/or Hispanic and black neighborhoods. The graph maps every school district in the US and compares the school performance of sixth graders (in reading and math). Even this simple correlation is very revealing. There are three clusters clearly separated by both wealth and race. White kids coming from upper and upper-middle class families tend to be the only group that outperforms the average in their school grades. In fact, the very top performers include not a single black or Hispanic district. (Bear in mind that the units of comparison aren't people but school districts – so there may very well be top performers coming from outside the wealthier and/or white families, but since we're dealing with district averages, individual top performances get averaged out.)

Even when the comparison is made across similar districts (in socioeconomic background) the results are the same. There is no non-white district that beats a predominantly white district (even if they are equally wealthy or poor): 

Comparing similar districts. Source: NYT
This says a lot about social mobility in the US. If a smart kid is born into a predominantly black or Hispanic (poorer) district, he or she will most likely have a much lower probability of good school performance and subsequent success than if the same kid were born into a predominantly white (richer) district. The environment these kids find themselves in is extremely important. It very often makes the key, invisible difference between failure and success.

In other words, kids born into poor black families are disenfranchised from birth. They are born unequal, as they will lack the same opportunities as kids born in richer white neighborhoods. Environment matters. This is a sad truth about the US, as the equal-opportunity assumption clearly does not apply. Sure, the geniuses will very often get picked out, but how many of these kids will ever get the chance to prove that they are geniuses?

Gladwell had a lot to say about this in Outliers, though mostly relating to differences in income, not race. It's the way upper and upper-middle class families groom their kids for success that makes them more likely to achieve it. The differences can be even more subtle than that – for example, the way the kids spend their vacations. Gladwell compares the performance of kids from richer and poorer families during the school year and actually finds that poor kids don't lag in performance; they can even surpass the rich kids during the year. This is (albeit partial) evidence that the schools work in enhancing student knowledge. The problem is the summer, when richer kids practice in preparation for school while poorer kids don't (and most likely spend their summers watching TV or playing). This opens up a sizeable gap between the richer and the poorer kids, one the poorer kids have a hard time closing even if their performance during the semester improves. However, these comparisons are done within districts, not between them. The between-district gap is due to a multitude of other factors: for example, richer parents will hire tutors and enroll their kids in a number of extracurricular activities, and in due time these kids will accumulate more human capital, making the performance gap larger and larger.

On the other hand, in poorer neighborhoods the parents on average do not or cannot afford to encourage the same type of behavior. In addition, schools in poorer districts, with a high concentration of poor students, usually lack the funding to attract better teachers or to provide the same facilities as schools in richer districts (e.g. computers and IT equipment), and in most cases they also lack an incentive to change that. Consider the example of the schools that "beat the odds":
In one school district that appears to have beaten the odds, Union City, N.J., students consistently performed about a third of a grade level above the national average on math and reading tests even though the median family income is just $37,000 and only 18 percent of parents have a bachelor’s degree. About 95 percent of the students are Hispanic, and the vast majority of students qualify for free or reduced-price lunches.
Silvia Abbato, the district’s superintendent, said she could not pinpoint any one action that had led to the better scores. She noted that the district uses federal funds to help pay for teachers to obtain graduate certifications as literacy specialists, and it sponsors biweekly parent nights with advice on homework help for children, nutrition and immigration status.
The district regularly revamps the curriculum and uses quick online tests to gauge where students need more help or whether teachers need to modify their approaches.
“It’s not something you can do overnight,” Ms. Abbato said. “We have been taking incremental steps everywhere.”
All this still doesn't explain the race effect. As the second graph shows, even for districts with the same economic standing, there is a clear difference in performance based on race. This isn't conclusive evidence of a causal relationship, however. Perhaps it's not race but something else that characterizes minority districts (like poor schools) that drives outcomes. Either way, the picture is worrying, and definitely not encouraging in light of America's problems with rising inequality and declining social mobility.

Wednesday, 4 May 2016

What I've been reading (vol. 6)

Alvin Roth (2015) Who Gets What - and Why? The New Economics of Matchmaking and Market Design. Houghton Mifflin Harcourt 

The Nobel Prize winner Alvin Roth summarizes what are basically his Nobel Prize-winning findings in this fascinating book about how markets work and how diligent market design can make them work even better. In 2012 Roth co-won the Nobel Prize with Lloyd Shapley for their contributions to "the theory of stable allocations and the practice of market design" – in other words, the theory and practice of solving the coordination problem of assigning kidneys to patients, students to schools, and doctors to hospitals. Shapley was responsible for the theoretical contributions back in the 1960s, and Roth was the one behind the actual applications several decades later – designing markets to solve the informational asymmetry problem and the matching problem. (Note: this is the second Nobel Prize awarded for matching and search theory. The first was awarded in 2010 to Dale Mortensen, Chris Pissarides, and Peter Diamond.)

A few words about matching. As its name suggests, matching is a derivative of the coordination problem, where some people are sellers of a good and others are buyers, but they usually cannot 'find' each other. In other words, a matching market is one where the price mechanism doesn't usually clear the market. Matching markets don't work like regular markets, where the price system is successful. In regular markets (think of the stock market or a supermarket) there is no courtship necessary: the sellers don't have to meet the buyers directly, nor do they have to engage in any interaction; the price system will make sure both sides are satisfied with the transaction. A matching market, on the other hand, is a market for human skills rather than goods or services: labor markets, college admissions, relationships. In none of these does the price system perfectly match supply and demand. Companies usually hire the best workers, not the cheapest ones (this depends on the type of job, however), as do universities. Both sides of the interaction have to woo each other, signaling their competence on one hand, and their facilities, scholarships/salaries and opportunities on the other. Relationships obviously also depend on courtship: on a date each person is trying to signal their strengths, trying to impress. So whenever a price system doesn't clear the market, economists call it a matching market.

Because of these specific characteristics, matching markets have to be designed a bit differently from regular markets, which depend only on prices. Signaling that you want to work somewhere or go to a specific university does not mean you will end up there (or get the woman/man that you want). When there is a lack of kidney donors to make sure every patient gets a transplant, scarcity has to be solved by matching. And the best way of solving the scarcity and coordination problems in a matching market is clever market design. Every market is based on rules, whether the stock market or the farmers market down the road – all have clearly established mechanisms as to how they operate. Throughout history many of these rules changed and adapted (usually to new technologies) in order for the market to operate more smoothly. So even spontaneously created markets have their set of rules, and therefore a specific market design. Roth makes a very convincing case that in order for a market to work well and provide benefits to society, it requires intelligent and deliberate design. This is hardly a book advocating central planning. Far from it. It lauds markets and simply tries to find ways to fix them when they could perform better; to lower the informational asymmetry; to solve the matching problem; and to set up the very market where exchanges could take place.

So what does it take? In order to work, markets need to have a lot of participants – they must be thick. After achieving thickness, the next obvious problem that can arise is congestion, which makes it difficult to select the best alternative (on both sides). When this happens, market participants can resort to strategic behavior and attempt to game the system. A well-designed market reduces the incentive for doing so. For example, in allocating school seats or medical residencies, if the assignment rules are poorly designed, participants strategize about expressing their true preferences, knowing that they might not get their first choice if they list it first. In a well-designed matching market this concern is removed and no one has an incentive to behave strategically (to hide their true preferences).
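The clearinghouses Roth worked on (medical residencies, school choice) are built around variants of the Gale-Shapley deferred acceptance algorithm, under which applicants cannot gain by misreporting their rankings. Below is a compact, illustrative sketch of the applicant-proposing version with made-up preference lists; it is not Roth's production code, just the textbook mechanism.

```python
def deferred_acceptance(applicant_prefs, program_prefs, capacity=1):
    """Applicant-proposing deferred acceptance (Gale-Shapley).
    applicant_prefs / program_prefs map names to ranked preference lists."""
    rank = {p: {a: i for i, a in enumerate(prefs)} for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}      # next program each applicant proposes to
    held = {p: [] for p in program_prefs}              # tentatively held applicants per program
    free = list(applicant_prefs)

    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                   # applicant has exhausted their list
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held[p].append(a)
        held[p].sort(key=lambda x: rank[p][x])         # program keeps its most preferred
        if len(held[p]) > capacity:
            free.append(held[p].pop())                 # least preferred is released to propose again
    return held

# Toy example: three applicants, three one-seat programs.
applicants = {"Ann": ["X", "Y", "Z"], "Bob": ["X", "Z", "Y"], "Cat": ["Y", "X", "Z"]}
programs   = {"X": ["Bob", "Ann", "Cat"], "Y": ["Ann", "Cat", "Bob"], "Z": ["Cat", "Bob", "Ann"]}
print(deferred_acceptance(applicants, programs))   # {'X': ['Bob'], 'Y': ['Ann'], 'Z': ['Cat']}
```

The resulting assignment is stable: no applicant and program would rather be matched with each other than with their current match, which is what removes the incentive to strategize.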

The flavor the book carries is that economics can be used to solve real problems (that's how I saw it, anyway). Roth's most famous contribution to that idea, surveyed in chapter 3, is the voluntary kidney exchange he helped set up in the early 2000s, which has so far saved thousands of lives: the New England Program for Kidney Exchange (NEPKE). How does it work? The graphic below explains it:
It's actually quite simple. Someone in your family (your wife, say) needs a kidney and you're willing to donate, but you're not a match. What NEPKE does is help you find another incompatible pair whose donor's kidney is compatible with your wife's, while your kidney is compatible with their patient's. And there you have it – the swap is done and two lives are saved. You didn't have to introduce any new agents: you have the patients and the willing donors. You've solved their asymmetric information problem, avoided the possibility of "gaming the system" or the use of money (which is illegal in kidney transplantation), made the market thick (it attracted a lot of donors and patients), quick (you've managed to avoid congestion), and safe enough for people to participate. A perfect example of successful market design. No wonder this won him the Nobel Prize.
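Not the actual NEPKE algorithm, but a minimal sketch of the core idea: scan the pool of incompatible patient-donor pairs for two-way swaps in which each donor's kidney suits the other pair's patient. Compatibility here is reduced to ABO blood type, which is a big simplification (real exchanges also check tissue type and antibodies).

```python
# ABO compatibility: which donor blood types a recipient can accept (simplified).
COMPATIBLE = {
    "O":  {"O"},
    "A":  {"O", "A"},
    "B":  {"O", "B"},
    "AB": {"O", "A", "B", "AB"},
}

# Incompatible patient-donor pairs: (pair id, patient blood type, donor blood type)
pairs = [("P1", "A", "B"), ("P2", "B", "A"), ("P3", "O", "AB")]

def two_way_swaps(pairs):
    """Return all pairs of pairs that could exchange donors."""
    swaps = []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            id1, pat1, don1 = pairs[i]
            id2, pat2, don2 = pairs[j]
            if don2 in COMPATIBLE[pat1] and don1 in COMPATIBLE[pat2]:
                swaps.append((id1, id2))
    return swaps

print(two_way_swaps(pairs))   # [('P1', 'P2')]: P2's donor fits P1's patient and vice versa
```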

In addition to kidney exchanges, he mentions a host of other examples of successful market design, all of which he helped to improve: a clearinghouse that matches US medical students to residency programs in hospitals (a very good example of preference aggregation), the assignment of students to nurseries and public schools, Airbnb rentals, high-frequency trading, auctions, etc. He even describes the failures and all the potential issues in each matching market he worked on. It's fascinating to read about an economist with so much practical experience in improving everyday life (and even saving lives!). If you're an economist, the book will give you a glimmer of hope that all is not lost for our profession, and that economics too can be used to make our lives better.

James Surowiecki (2005) The Wisdom of Crowds. Why the Many Are Smarter Than the Few. Abacus

This book can be summarized in a single sentence: Crowds are smarter than individuals. Controversial, isn't it? To be more specific, we can also put it this way: large crowds are smarter (better forecasters, better problem solvers, better innovators) than even selected elite experts – subject, however, to several conditions. The first is that individuals within the group act independently of one another, meaning that they must reach their decisions/predictions on their own, without any peer pressure or group influence. The second is that the group is diverse in opinion, meaning each person has some private information to bring to the group. The more diversity of opinion, the better the chance that a given issue can be solved (or precisely predicted). Related to this is decentralization: individuals in the group have to draw their information from specialized, local knowledge. And finally, there has to be a mechanism to aggregate all these individual ideas/thoughts/predictions into a single, collective decision/prediction.


It sounds like a decent theory, and there is even some empirical backing to it, given, of course, that all of the aforementioned conditions are satisfied. It's easy to dismiss the hypothesis of crowd 'wisdom' by calling upon some very basic psychological biases and heuristics we are prone to: herding in financial markets (i.e. any type of panic or hype on stock markets), or the availability heuristic and how we tend to rate only the salient information as more important, preventing us from seeing the 'bigger picture'. Or how we tend to be clueless about statistics and basic statistical inference. Or how we fall victim to anchoring, hindsight bias, illusions of validity, and a host of other things that make us, from a psychological point of view, quite irrational in our values and judgment.

The author doesn't go too deep into uncovering the psychological traits of our behavior that make us very bad at predicting things, but somehow, as he claims, our individual shortcomings aren't that important at all. All that matters is that we as a group, collectively, can indeed outperform the experts (which actually isn't too difficult to do – just recall Tetlock's study on the failings of expert predictions), under the conditions that each person gives his or her prediction independently of the group, that the group is diverse enough, and that it benefits from local information. With a proper aggregation mechanism to pull them all together, the predictions should be rather precise. The examples include guessing the weight of an ox, finding a lost submarine in the middle of the ocean, election prediction markets (like the Iowa Electronic Markets), the price mechanism, the security community's failures, etc.

In all these cases, large crowds of diverse individuals, each operating from his or her own specific standpoint, are shown to be good forecasters. So, in a nutshell, our aggregated collective opinion can somehow even out all our individual shortcomings, as if each of our error terms (our variances, if you wish) cancels the others out once aggregated. For example, on a single-dimension issue, if enough people pull (are biased) one way and enough pull the other way, their biases cancel out and we're left with an average that is bound to be close to the truth.

Not quite, actually. Statistics doesn't really work that way: summing individual variances only increases the total variance rather than decreasing it (well, depending on the correlation between individual responses), particularly once we move beyond single-dimension issues (and most issues are more than one-dimensional). But OK, despite this and some other problems, the argumentation in support of the hypothesis is interesting enough. There is certainly some merit to it, as it has been scientifically tested on multiple occasions and shown to be correct (particularly in forecasting). It's just that the author doesn't really present it this way (which some would call boring), but tries to make an interesting story based on selected examples that support the argument. In a way it's similar to Gladwell's books (both writers actually featured as columnists for the New Yorker) – it's a collection of interesting stories. So just like with Gladwell, don't judge it on its scientific merits (that can take a lot of the fun out of reading), but on how interesting and enjoyable it is to read. After all, it's not that the idea itself doesn't stand. It does, but many conditions must be satisfied. Yet the subtitle "Why the many are smarter than the few, given that several conditions are satisfied" is not all that fun and won't make you buy the book.
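For the record, the standard statistical result behind that parenthetical (not something from the book): for n estimates with common error variance σ² and average pairwise correlation ρ, the variance of the crowd average is

$$\operatorname{Var}(\bar{x}) \;=\; \frac{\sigma^2}{n} \;+\; \frac{n-1}{n}\,\rho\,\sigma^2,$$

so with truly independent errors (ρ = 0) the noise does wash out as the crowd grows, but any shared bias or herding (ρ > 0) leaves a floor of roughly ρσ² that no amount of aggregation removes – which is exactly why the independence and diversity conditions do the heavy lifting.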

Wednesday, 27 April 2016

Graph of the week: Immigration: perception vs reality

An interesting chart from the Economist about the perception and the reality of the total number of Muslims in European countries. It's striking how big the overestimation gap is in the selected countries (do bear in mind that these are the official figures; perhaps there are undocumented Muslim immigrants that increase the actual numbers, but I sincerely doubt that the real numbers are anywhere near the perception).
Source: The Economist
So why is there such a huge overestimation (between 3 and 8 times!) in Europe of the total Muslim population in these countries? In general, the anti-immigration sentiment rests on the same concept: that there are too many immigrants (many of them without any legal documents) "taking our jobs". The perception of the total number of minority immigrants in Western countries is very similar to the one above: their numbers are vastly overestimated.

Perhaps the reason for this is that minority immigrants tend to cluster in specific areas, primarily due to the cultural and language barriers of their new environment. Add to this the perception held by the majority of Westerners that Islamic values are not compatible with the West. This makes the indigenous population hostile to newcomers, particularly those coming from Islamic countries (example: the current refugee crisis). As a consequence, entire neighborhoods of large cities turn into minority-group ghettos, which tend to be correlated with higher incidences of crime and are considered unwelcoming and dangerous areas to live in.

The problem is actually cultural assimilation. Nothing is being done to assimilate the immigrants, who are (in most cases) forced to start working in the grey economy, usually for their compatriots who came earlier. Greece has a particularly big problem in this department, as there is an entire underground market for undocumented workers. The immigrants get the low-paying jobs that aren't registered in the economy and get paid under the table. This is bad for both sides: the government doesn't get its tax revenues, while the workers are forced to work for scraps without any job benefits or security, all at the mercy of their local "landlords", or whatever we should call them. Naturally some of them will be tempted to take up crime (or, in the extreme case, terrorism). Failing to assimilate immigrants, in addition to causing social problems, alienates an entire group of people, virtually preventing them from ever adapting to the Western style of life. Perhaps it's not entirely their fault for not being able to assimilate; they've simply never been given an opportunity to do so. After all, the motivation for emigrating to the West is to enjoy a better lifestyle, to get the opportunity they never had back home. They didn't come to work for nothing and be treated like slaves. They didn't come to cause violence. The closed environment they ended up in pushes them in that direction. In the end this hurts the economy in more ways than one: in addition to unpaid taxes, the labor force doesn't really benefit from undocumented workers.

So how should Western governments address this? Scattering immigrants across the country, or across different neighborhoods within a city, instead of letting them form clusters is one way of doing so. This is particularly applicable in the current European state of 'controlled' immigration (controlled in the sense that EU governments are documenting each immigrant and allocating them to a specific area). Another is to clamp down on tax evasion (and simplify the tax code), which lowers the incentive to work in the grey economy and, as a consequence, to resort to crime and violence. Neither of these may be enough to assimilate new immigrants completely, but complete assimilation never really happens with first-generation immigrants anyway. It's the second generation and beyond (the ones raised in the new environment) that become fully integrated into the new culture. The reason why this isn't happening in many European countries isn't the impossibility of Muslim cultural assimilation; it's the specific clusters they amass in, which never really offer the newcomers any opportunity, making them think that being in the West is not that special after all. This fuels anger – on both sides, actually. And hence the overestimated perception.