Wednesday, 27 April 2016

Graph of the week: Immigration: perception vs reality

An interesting chart from the Economist compares the perception and the reality of the total number of Muslims in European countries. It's striking how big the overestimation gap is in the selected countries (bear in mind that these are the official figures; there may be undocumented Muslim immigrants who push the actual numbers up, but I sincerely doubt that the real numbers are anywhere near the perception). 
The Economist
So why is there such a huge overestimation (between 3 and 8 times!) in Europe of the total Muslim population in these countries? In general, the anti-immigration sentiment rests on the same idea: that there are too many immigrants (many of them without legal documents), "taking our jobs". The perception of the total number of minority immigrants in Western countries is very similar to the one above. Their numbers are vastly overestimated. 

Perhaps the reason for this is that minority immigrants tend to cluster in specific areas, primarily due to the cultural and language barriers of their new environment. Add to this the perception, held by a majority of Westerners, that Islamic values are not compatible with the West. This makes the indigenous population hostile to newcomers, particularly those coming from Islamic countries (example: the current refugee crisis). As a consequence, entire neighborhoods of large cities turn into minority ghettos, which tend to be correlated with higher incidences of crime and are considered unwelcoming and dangerous areas to live in. 

The problem is actually cultural assimilation. Nothing is being done to assimilate the immigrants, who are (in most cases) forced to start working in the grey economy, usually for compatriots who came earlier. Greece has a particularly big problem in this department, as there is an entire underground market for undocumented workers. The immigrants get the low-paying jobs that aren't registered in the economy and are paid under the table. This is bad for both sides: the government doesn't get its tax revenues, while the workers are forced to work for scraps without any job benefits or security, all at the mercy of their local "landlords", or whatever we should call them. Naturally, some of them will be encouraged to take up crime (or, in the extreme case, terrorism).

Failing to assimilate immigrants, in addition to causing social problems, alienates an entire group of people, virtually preventing them from ever adapting to the Western style of life. Perhaps it's not entirely their fault for not being able to assimilate; they've simply never been given an opportunity to do so. After all, the motivation for emigrating to the West is to enjoy a better lifestyle, to get the opportunity they never had back home. They didn't come to work for nothing and be treated like slaves. They didn't come to cause violence. The closed environment they ended up in pushes them in that direction. In the end this hurts the economy in more ways than one: in addition to the unpaid taxes, the labor force doesn't really benefit from undocumented workers. 

So how should Western governments address this? Scattering immigrants across the country, or across different neighborhoods within a city, instead of letting them form clusters is one way. This is particularly applicable in the current European state of 'controlled' immigration (controlled in the sense that EU governments are documenting each immigrant and allocating them to a specific area). Another is to clamp down on tax evasion (and simplify the tax code), which lowers the incentive to work in the grey economy and, as a consequence, to resort to crime and violence. Neither of these may be enough to assimilate new immigrants completely, but complete assimilation rarely happens with first-generation immigrants anyway. It's the second generation and beyond (the ones raised in the new environment) that become fully integrated into the new culture. The reason why this isn't happening in many European countries isn't the impossibility of Muslim cultural assimilation; it's the specific clustering immigrants end up in, which never really presents any opportunity to the newcomers, making them think that being in the West is not that special after all. This fuels anger. On both sides, actually. And hence the overestimated perception. 

Saturday, 23 April 2016

What I've been reading (vol. 5)

I'm running behind with the reviews (still keeping up with the book reading though), so I'll publish the next two within the space of a week. Today it will be Gladwell's three books: Outliers, Tipping Point, and Blink (in the order that I've read them), while next time it will be Roth's Who Gets What, and Surowiecki's Wisdom of Crowds.

Gladwell, Malcolm (2008) Outliers. The Story of Success. Little, Brown

Of all his books, in my opinion, this one ranks top. In Outliers, his third book, Gladwell sets out to redefine our understanding of success. The brilliant writer that he is, he takes us on a fascinating journey, linking together little bits and pieces into an overarching yet slightly simplistic theory about luck, opportunity, hard work, intelligence, heritage, and cultural legacy, all of which are important factors in explaining why some people succeed (and are thus considered "outliers").  

Although this is far from a scientific treatise, the author occasionally calls upon scientific evidence, and constructs his arguments so convincingly that you might as well think that all of it is actually backed by experimental evidence. It’s not, but in this book, it doesn’t even matter. The narrative is simply that good. The point, from what I understood, is not to provide a comprehensive scientific study, but rather to offer some food for thought. It’s mostly about observing patterns. What excites me about the book’s narrative is the fact that the author moves away from the standard survivorship bias, in which successful people are thought to be successful because of their competence, skills, or ability. That certainly has made the difference, but, to put it in mathematical terms, skills, intelligence, or competence are necessary, but not sufficient conditions for success.

Gladwell tries to paint a picture of what the sufficient conditions are. What is it that makes successful people differ from the rest? A whole range of things, actually. From their upbringing (family and cultural heritage), to how long they’ve nurtured their skill-set (the famous 10,000-hour rule - the amount of time it takes you to become an expert at something - quite a controversial hypothesis actually, see here, here, here, here and here), to the very date and year they were born (and thus the specific conditions they were exposed to). Satisfying just one of these factors is not enough. Many people come from the same cultural upbringing, many work hard, many were born into the same generation, went to the same school or university, or hung out with people like Bill Gates or Steve Jobs, but only a few actually succeeded. In the end, it comes down to a particular combination of all these factors, and even this still doesn’t have to be enough. Finding oneself at the right place at the right time is much more crucial than we would think. 

And that is, in my opinion, the greatest contribution of this book. Just like Taleb’s Fooled by Randomness (there are a lot of differences between the two, in style, approach, and the message they convey, but they both offer a similar line of reasoning), it shows us that luck is crucial and that success can actually be purely random. Out of a sample of 7 billion people, some of us are bound to be rock stars, presidents and world leaders, Hollywood actors, super-rich businessmen, world-class athletes, stellar scientists, etc. It’s exactly what the law of truly large numbers tells us: even though becoming a brilliant scientist, a super successful businessman, an admired world leader, or a brilliant athlete is extremely difficult given the enormous competition, and even though there have been only a few such extraordinary people throughout history, with a large enough sample there must be individuals who satisfy those conditions. In other words, outliers are a natural occurrence; they will always exist.
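The back-of-the-envelope logic can be sketched in a few lines of code. All the numbers below are my own illustrative assumptions, not figures from the book: even a minuscule per-person chance of extreme success, multiplied over billions of people, still guarantees a sizable crop of outliers.

```python
import random

# Illustrative assumptions only (not from the book): a tiny chance
# of "outlier" success per person, applied to a huge population.
p_success = 1e-6                 # assumed per-person chance of extreme success
population = 7_000_000_000       # roughly the world population

# Expected number of outliers is simply p * N
expected_outliers = p_success * population
print(expected_outliers)         # ≈ 7000 expected outliers worldwide

# A quick simulation on a smaller sample shows the same logic at work:
# with n = 1 million and p = 1e-6 we expect about one outlier.
random.seed(42)
n = 1_000_000
outliers = sum(random.random() < p_success for _ in range(n))
```

The exact numbers are beside the point; what matters is that for any fixed small probability, a large enough population makes the "impossible" routine.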

And if you think that your success wasn’t random, that it was purely due to your hard work, fast-paced thinking, high intelligence, and unique competence and skillfulness, you are wrong. Surely all those things helped (they were necessary conditions), but there is a multitude of invisible factors that you simply aren’t (or even cannot be) aware of that gave you the opportunity to use your unique ability and build yourself into what you are today. 

Opportunity is the theme of the first part of the book. It tells us that extraordinary individuals need an opportunity to succeed, and that the difference between them and the rest is that they take their opportunities. Here is where ability and quick thinking matter – the fact that you’ve recognized an opportunity once it presented itself to you. Many are faced with an opportunity, many come from a nurturing environment and a favorable family background, but only a few really exploit it to the limit. It works in the other direction as well; many very intelligent, very competent, and very skillful individuals never get the proper opportunity. Their hard work will get them their rewards, and in most cases they are able to achieve a decent living standard, but they won’t become the stellar success they ought to have been. A great example Gladwell uses here is the longitudinal study done by Lewis Terman. In 1921 Terman gave IQ tests to students in California’s elementary schools, aiming to find the true geniuses. Out of 250,000 students he identified 1,470 children with an IQ greater than 140 (confirmed by three separate tests, each new test given only to the group of top performers). Over the rest of his life Terman tracked his geniuses to see how they would succeed in life. The point was to answer whether or not a high IQ is a prerequisite for success. It turns out – it’s not. Yes, some of them turned out to be successful (some became authors, some judges, some university professors, businessmen, politicians, etc.), but not nearly as many as Terman had hoped. No one won a Nobel Prize (two students who later won the Prize were rejected in the screening process; their IQs weren’t high enough), no one became President, no one reached national prominence in the way they were “supposed to”. 

In fact, the positive relationship between IQ and success only works up to a certain point – an IQ of about 120 (enough to get you through a competitive graduate degree). In other words, the difference between an IQ of 100 and one of 120 matters much more than the difference between 140 and 180, for example. Sure, the person with an IQ of 180 is smarter than a person with 140, but not that much smarter. Gladwell gives a good additional example: height and basketball players. Being tall doesn’t guarantee that you will be a good basketball player, but it can help. The same goes for a high IQ. It can help, but it does not guarantee success. All it takes is to be smart enough (Gladwell also cites the university degrees of Nobel Prize winners in Medicine and Chemistry – although the list does contain the usual Ivy League suspects, there are many mediocre universities on the list as well). 
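This threshold idea is easy to make concrete with a toy model. To be clear, the model is entirely my own construction – the logistic functional form and its parameters are illustrative assumptions, nothing like this appears in the book – but it captures the shape of the claim: if returns to IQ saturate, the gap between 100 and 120 dwarfs the gap between 140 and 180.

```python
import math

def success_potential(iq, midpoint=105, scale=8):
    """Hypothetical 0-1 'potential' score on a saturating logistic
    curve; midpoint and scale are illustrative assumptions."""
    return 1 / (1 + math.exp(-(iq - midpoint) / scale))

# Compare the two IQ gaps from the text
gap_low = success_potential(120) - success_potential(100)
gap_high = success_potential(180) - success_potential(140)
print(gap_low > gap_high)  # True: the 100-to-120 jump matters far more
```

With these (arbitrary) parameters the 100-to-120 gap is roughly forty times larger than the 140-to-180 gap, which is the "smart enough" threshold in miniature.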

Not surprisingly, the patterns of success in the group of Terman’s geniuses had much more to do with family background, and the different opportunities these kids received, than with their IQ. This becomes evident if the sample is divided into three groups based on the level of their career success. The best performing geniuses benefited heavily from family background: they all came from middle- and upper-class families with highly educated parents. The worst performing group came from poorer families lacking education. These super smart kids were simply unlucky to be born into a less favorable environment. Why their environment is so constraining to success is an entirely different topic, one the book only briefly touches upon; the social mobility debate has a lot more to say about why this happens, why the poor get disenfranchised because they are poor. But the book does get you thinking about it, that’s for sure. 

There are many more different stories in the book, from Jewish garment manufacturers turning into lawyers and doctors, to why Southerners have a short fuse, to Korean pilots, Asian math whizzes (this one is particularly interesting, and has a lot to do with our perception of hard work), and how our education system is shaping the opportunities for us. I’ll leave it to the reader to uncover many of these on his/her own.

Gladwell, Malcolm (2000) The Tipping Point: How Little Things Can Make a Big Difference. Little, Brown
Gladwell, Malcolm (2005) Blink: The Power of Thinking Without Thinking. Little, Brown


The following two I’ll present briefly and jointly. They are both compelling pieces of non-fiction, but I found Outliers much more applicable to the things I'm interested in. 

In The Tipping Point, Gladwell talks about social epidemics: how and why some ideas, trends, and social movements spread so quickly (in Internet lingo we would say “go viral”) and grow into global phenomena with far-reaching, paradigm-shifting consequences. According to Gladwell, all of this starts with a small, almost trivial change, which he calls the tipping point. Just as a massive wildfire can be triggered with a single match in a dry forest, or just as any epidemic can be traced back to patient zero, so can ideas and cultural patterns spread quickly. He suggests that change doesn’t crawl up on us slowly and without notice; it happens in small but sharp steps, which he attributes to a tipping point, the start of it all. 

Gladwell recognizes three basic rules of such epidemics: the law of the few, the stickiness factor, and the power of context. Basically, for an idea to spread you first need three types of people: the connectors (the “network hubs” who spread the story quickly through their group), the mavens (the knowledge-accumulating information specialists who start word-of-mouth epidemics purely due to their unique exposure to information), and the salesmen (you’ve guessed it – the persuaders). In addition to having (or being) one of these types of people, you still need other factors to carry the message across: the stickiness factor – the specific content that makes the message interesting (or important) – and, most importantly, context (we are highly sensitive to our environment and react accordingly). Context is what makes the whole thing hard to stage or predict. Virality doesn’t really have a formula (try making a YouTube video that becomes a viral hit). To be fair, Gladwell wrote this back in 1999, before the viral online revolutions, but even today recognizing the actual tipping point in order to influence and predict trends is virtually impossible – even if you have fulfilled all three "rules" of social epidemics. 

In Blink, another pop-psychology bestseller, it’s all about how fast, and based on what information, we make our judgments and decisions (how we "thin-slice", i.e. use limited information to draw conclusions). This one is his weakest effort, I feel. Not because the stories are bad or the narrative is boring; it’s just that he puts too strong an emphasis on the benefits of our intuitive thinking. Psychologists like Daniel Kahneman have clearly shown how our intuitive thinking (System 1) can get us into trouble. Sure, it’s good most of the time, but at other times it fails completely, leading us to make some really poor decisions. 

And this sits oddly alongside Gladwell’s very own examples of how making quick decisions can turn out badly. On one hand we are given examples where rapid decision-making (in two seconds) beats dwelling over something for long periods while examining the many options at our disposal; on the other hand, rash decisions aren’t really superior to deliberate thinking. It seems a bit contradictory at certain points, and the argument he’s trying to make lacks a certain coherence. In my opinion, investing time in Kahneman’s book is a much better way to understand how we think. Read Blink after that one: you’ll recognize the flaws in the arguments, avoid confusion, and enjoy some other interesting examples. 

Finally, just like in Outliers, most of Gladwell's conclusions in these two books aren’t really backed by scientific evidence (e.g. the decrease in the New York crime rate), just his own wit and engaging thinking. He tells good stories, and sure, he sometimes picks the anecdotal evidence that supports his arguments, and his conclusions are often far-fetched, but they’re still interesting enough to think about, and from an entirely different angle. This alone makes the books worthwhile reading. They make you think.

Thursday, 31 March 2016

The war on science: How the Internet exposed the failure of our education system

In this blog post I will make a brief digression from my usual economics topics. 

A year has passed since National Geographic published an issue with the following disturbing cover, featuring a couple of conjectures all too familiar and all too frustrating to professional scientists and science enthusiasts:
"Why Do Many Reasonable People Doubt Science?",
National Geographic, March 2015
In a really good op-ed, author Joel Achenbach of the Washington Post goes through a variety of misconceptions and doubts modern science has to fight against. The most obvious examples are listed on the cover. These include - but are not limited to - (i) the denial of climate change (never mind the thousands of state-of-the-art research papers that prove otherwise; don't trust Al Gore on this, check for yourself here, here, here, or here), (ii) the claim that GMOs are evil (what does this even mean? Again, the scientific evidence is quite clear on this), (iii) that the moon landing was faked (!) (classic conspiracy theory; it's ridiculous even to discuss it), (iv) that evolution never happened (!!) (it keeps getting worse and worse), and of course (v) the typical confusion of cause and effect in the claim that vaccinations lead to autism (despite a hundred-year success of modern medicine in preventing diseases such as polio, measles, mumps, rubella, smallpox, etc. Again, the evidence is overwhelming: here, here, here, or the vast amount of articles available here. And if anyone uses the argument that all the kids that have autism have been vaccinated, smack them across the face (hypothetically) and say: yes, but all the other kids that don't have autism also got vaccinated! A person familiar with the idea of selection bias would never make such a ridiculous claim).
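The selection-bias point can be made with a few lines of arithmetic. The rates below are hypothetical, chosen purely for illustration (not real epidemiological data): if nearly all children are vaccinated, then nearly all children with autism are vaccinated, and so are nearly all children without autism, so the overlap tells us nothing about causation.

```python
# Hypothetical rates for illustration only (not real data).
vaccination_rate = 0.95   # assumed share of children vaccinated
autism_rate = 0.015       # assumed base rate, independent of vaccination

children = 1_000_000
autistic = children * autism_rate
non_autistic = children - autistic

# Share of autistic children who were vaccinated:
share_autistic_vaccinated = (autistic * vaccination_rate) / autistic
print(share_autistic_vaccinated)       # ≈ 0.95, same as the overall rate

# Share of NON-autistic children who were vaccinated:
share_non_autistic_vaccinated = (non_autistic * vaccination_rate) / non_autistic
print(share_non_autistic_vaccinated)   # ≈ 0.95, identical
```

"Almost every autistic child was vaccinated" is exactly what you would expect even if vaccines had nothing to do with autism, because almost every child was vaccinated, period.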

Worst of all, the war on science hasn't evaporated. If anything, it is gaining momentum. Other, more salient, issues have arisen in the meantime, but Internet forums, blogs and websites still get filled with quack ideas, with no sign of abating. Regardless of the overwhelming scientific evidence, doubters, as they are colloquially referred to, still reject what only a few decades ago was considered universally accepted truth – such as the moon landing, vaccinations, or evolution. This type of ignorance is not only worrying, it’s also dangerous. Take the Ebola panic, for example (or any similar supposed pandemic threat, such as the recent Zika virus scare). The only way to spread this virus is via direct contact with the bodily fluids of a disease carrier (contact with the deceased being a funeral custom in certain African communities). The scientific community was quick to dismiss any possibility of the virus mutating into a pandemic. This, however, didn’t stop the general population or the media from making catastrophic predictions about the virus mutating and presenting an imminent threat to the human race (a good example of the availability heuristic).

Quacks, quacks everywhere 

When panic and ignorance triumph over science, policies change. In democracies politicians follow the will of the voters - at least to some extent, when the issue is of high enough salience to make an electoral difference. So when electorates get overwhelmed by a veil of ignorance, the demand for politicians supporting such ideas increases, and eventually the supply adjusts. Ignorance gives rise to politicians who advocate radical and ridiculous ideas (to which the US is currently exposed). This is particularly dangerous for faulty ideas in economics and politics. A series of social experiments with communism and National Socialism should have been more than enough to convince the vast majority of the population that this approach is obviously wrong. Not according to the (albeit obscure) websites and forums where such ideas are alive and kicking. What’s even worse is to see such ideas manifested in party politics all across Europe. Forget anger, ignorance is the problem.

Economics is not immune to quack ideas either. Whether it’s monetary policy in terms of printing money to pay off one's national debt or simply to make everyone better off, the import-export mercantilist fallacy, or anti-immigration policies, the policies applied are often substantially different from what decades of empirical economic research suggest. You will never see a self-respecting, non-charlatan economist advocate protectionism, oppose immigration, or suggest printing money to reach prosperity. Monetary policy misconceptions are particularly dangerous, as things can go wrong very quickly when you mess with currencies, the money supply, or interest rates. It takes but a few weeks (even days) for a nation’s savings to get wiped out (recall Argentina in 2001), for the interest rates on loans to spike, or for hyperinflation to send the economy down the drain (particularly in small open economies).

In the natural sciences things are only slightly better. The medical profession is still relatively untouched by the anti-vaccination movement, as is the theory of evolution taught in schools, but it's only a matter of time before this changes. Sooner or later there will be a political movement advocating abandoning vaccination. All it takes is to form an interest group and get a wealthy backer to fund your lobbying activities. In democracies it really is that easy (relatively speaking).

One thing common to all branches of science (natural and social) is the accusations made against those struggling to protect the scientific truth. Speaking about climate change will get you labeled a left-wing environmentalist fanatic; speaking pro GM food will get you labeled a globalizationist, an advocate of the new world order, and possibly an enemy of European farmers. Arguing pro vaccinations, you’re likely to be accused of protecting the pharmaceutical companies, of being a slave to corporate profits, a mercenary for hire aiming to spread confusion among the people. Arguing against printing money, you will probably be accused of being a mercenary of the bank(st)ers. It’s tough defending reason these days. 

Blame the internet?

Unfortunately, it seems the internet revolution bears a large chunk of the responsibility for this outbreak of ignorance. People literally get educated from Facebook and Twitter statuses. From the comment sections of articles. From internet forums. Without ever reading a single book (not to mention a paper) on the topic, they easily disregard hard scientific evidence. 

Never has there been such a vast amount of information available to the general public, and yet such a low level of general knowledge. Never before have more people believed in and amplified the role of scientific quacks than they do today. Never have those who ridicule modern science garnered more attention than today, having modern technologies at their disposal, with the widest possible audience within reach, only a click away.

Perhaps the congestion of information is causing this decline in knowledge - people simply choose the easiest form of information available to them. It's much easier to close the information gap by succumbing to one's prior set of biases and beliefs. Viewed through an economic lens, it's actually somewhat rational - people choose the most efficient way to lower the informational asymmetry. If you happen to stumble upon a source of information that confirms all of your prior beliefs and misjudgements about a certain topic, you're much more inclined to immediately accept it as an unconditional truth than to try to seek out opposing arguments and carefully weigh both sides of the story before forming an opinion. Not to mention examining the existing literature, gathering empirical evidence, conducting experiments, etc. After all, who has the time to do that? Scientists, that's who! It is, after all, their job description. 

The internet became a platform for anyone to write and argue whatever they want. And in this confusion of information it’s often very difficult to separate fact from fiction, signal from noise – particularly for the poorly educated. The failure of the education system in many countries is only now showing the full depth of its downfall. No one with a high school degree should be claiming that evolution is a disproven theory, or that vaccinations lead to autism. Before, we simply weren’t aware of the vast amount of ignorance around us. Now anyone can read the comment sections and forum entries, only to reach the inevitable conclusion that the education system has hopelessly failed in its primary goal – to educate the population. 

Win the war – educate! 

So what's left for those on the defense line of science? Most scientists are not loud, media-loving creatures. They shy away from media attention and like it best when they're left alone to do their work. This is the career choice they’ve made. But their voices should be heard. Much more than the voices of attention-loving, exhibitionist quacks supported by bemused politicians. 

To win the war being waged on science, education reform should be the starting point. Education reform in this case implies a curriculum focused on explaining how the scientific process works and how it makes the world better off. The basic curriculum, from primary school to high school, should be focused on developing the cognitive capacity of our children to understand the world around them, and to be better equipped to explain, understand and adapt to the process of constant change in the world, both natural and social. 

Economics should be introduced into the curriculum as well, or at least the basics of money and finance. No one teaches our children about loans, savings and interest rates. The level of financial knowledge is surprisingly low given the consumerist society we live in, where virtually every household either saves money or holds a loan.

Finally, the war being waged on science cannot be won in the short run. Changing the education system means waiting decades to see how it plays out and whether it results in a more knowledgeable society. Today we can see that the system we had in the past failed to adapt to the rise of internet technology, where actual knowledge was quickly replaced by half-baked ideas pushed to people through obscure websites, exploiting the information time gap. What we need to do now is focus on reforming the education system to bridge this gap – to use the internet as a tool for disseminating knowledge, instead of letting it keep contributing to the spread of ignorance.

Monday, 21 March 2016

Behavioral economics; or What I've been reading (vol. 4)

Today's book review is all about behavioral economics. This relatively new branch of economics (it originated in the 1970s and reached prominence in the past few decades) has been a complete eye-opener for many. It changed the way many academic economists think about the concept of rationality (although it hasn't yet persuaded the majority, I'm afraid), and it performed the ultimate merger between the fields of economics and psychology. 

I am proud to admit that Kahneman's masterpiece has changed/shifted/(nudged?) my perspectives as well. Ever since first reading about the field a few years ago (it was Ariely's Predictably Irrational and Caplan's Myth of the Rational Voter that got me started), I have found myself attracted to the notion of abandoning the standard rationality assumption, even though, as a person with standard academic training in mainstream economics, I found that rather difficult to do. I liked the mathematical precision of economic models, and was not overly enthusiastic about attempting to empirically test any of them. In hindsight, how silly of me! 

However, after having completed the classics, I admit that I too have become infected by this new and exciting field. I haven't changed my worldview after reading a few books; I'm still a political economist and an institutionalist, with public choice theory at the forefront of my research interests and efforts, but now I'm more inclined to give all of it a behavioral flavor (after all, having clear institutional rules at the forefront of a democratic society coincides well with the "nudge", libertarian-paternalism perspective - it's all about mechanism design anyway). 

So behavioral economics is definitely a research field worth pursuing. At the very least it can serve as an eye-opener for many economists still enslaved by formal modelling (as I was). Without further ado (we have a long read in front of us), let's start with Kahneman's classic.

Kahneman, Daniel (2011) Thinking, Fast and Slow. Allen Lane.

In what is arguably one of the best books ever written on the psychology of decision-making, Nobel prize winner Daniel Kahneman offers a slightly autobiographical insight into how our mind works and how we tend to be absorbed by heuristics which prevent us from being the rational decision-maker (the homo economicus) the field of economics set out to define. Kahneman won the Nobel prize in economics in 2002 for laying the foundation of an entirely new field of research (behavioral economics), and he would certainly have shared it with his long-time friend and collaborator Amos Tversky, who unfortunately died six years before the prize was awarded to Kahneman (who, by the way, shared it with another brilliant scientist, the father of experimental economics, Vernon Smith). The book is therefore a detailed overview of all the work Kahneman and Tversky did in their careers, from the time they met at the Hebrew University of Jerusalem back in 1969, to their time in Eugene, Oregon and at Stanford’s Center for Advanced Study in the Behavioral Sciences. This includes a wide variety of topics, from the workings of System 1 and System 2 (the basic idea behind the title itself: System 1 is fast thinking, System 2 is slow thinking), to heuristics and biases, to prospect theory.

Their arguably most important finding is how people deviate from rational behavior, even though most of the time their thinking (and their decisions) are sound. So in general people are rational, but there are a number of scenarios where emotions such as fear or affection take over. Kahneman and Tversky found systematic errors in the thinking of normal people, but they linked these errors to cognition rather than emotion (e.g. the availability heuristic – people evaluate the relative importance of issues by the ease with which they can be retrieved from memory, which is largely determined by the relative media coverage; this explains why some issues become salient in the public discourse, and how it can easily lead to populism in politics - the so-called availability cascade).

The reason their research turned out to be so successful is that their simple examples showed people how they themselves can fail at rational thinking (e.g. experienced academics, when they read a study result, tend to attribute judgment errors to the typical, average participant of the study - usually a student. It's easy to distance yourself from research findings this way; but when they themselves got caught up in the same biases, they started changing their minds).

The book is separated into several parts. The first one, concerning the workings of System 1 and System 2, sets the framework (and the tone) for the rest of the book, where Kahneman dissects judgment heuristics and explores why it is difficult for us to think statistically, what makes us overconfident in our beliefs, how we make choices and what prospect theory is, challenges the assumption of economic rationality, and finally presents the distinction between the two selves of the human mind, the experiencing and the remembering self. 

So what are System 1 and System 2? Metaphors for intuitive, fast thinking (System 1) and slow, deliberate, effortful thinking (System 2). System 1 operates automatically and quickly, with little effort and without any voluntary control. E.g. recognizing a facial expression, recognizing objects, orienting attention, detecting distance, detecting hostility, reading words on billboards, answering 2+2, driving a car on an empty road (given some experience), any kind of automatic work, understanding simple sentences, succumbing to stereotypes, etc. Everything that has to do with effortlessly maintaining impressions and feelings. Its mental actions are involuntary and it cannot be turned off. System 2, on the other hand, allocates attention to effortful mental activity that demands complex computation and deeper thinking. It is slower, deliberate, and orderly. It makes explicit and deliberate choices and decisions. E.g. focusing attention on specific details, following the voice (and conversation) of one person in a crowded room, looking for a specific person, searching memory to identify a song, walking faster than usual, counting letters in a text, telling someone your phone number, parking in a narrow space, comparing things, checking the validity of an argument, etc. It's about paying attention, which is why it's difficult (almost impossible) to do several System 2 things at the same time (you can't do math while driving, but you can drive "normally" and talk to someone).

Because we can't perform two (not to mention more) System 2 tasks at the same time, we tend to be vulnerable to cognitive biases and heuristics (e.g. the invisible gorilla experiment). We tend to be blind to the obvious and, even more so, blind to our blindness. The problem is that our System 1 generates suggestions for System 2: impressions, intuitions and feelings. If these are endorsed by System 2 (without modification), they turn into beliefs, and subsequently into voluntary actions. This is usually fine and perfectly normal, but it is also what causes our cognitive illusions. System 2 will only take over when things get too difficult or when an event violates your model of the world (e.g. an accident, a terrorist attack, or seeing a Black Swan). The division is nonetheless important, as it minimizes effort and optimizes performance. We use System 1 most of the time, because most of the time it works very well. BUT, System 1 is prone to biases and illusions which easily turn into systematic errors of judgment that are applied and stored by System 2. 

From this discrepancy our heuristics arise (NOTE: heuristics are simple mental rules people use to make judgments). We fail to account for basic statistical laws and patterns (like regression to the mean) and neglect statistical base-rate information altogether; we are overconfident in our decisions (90% of us believe we are better drivers than the average); we are unable to understand the difference between correlation and causality (the illusion of causality - the need to find causal connections where there really aren't any); we fall victim to confirmation bias (the deliberate search for confirming evidence), to framing effects (90% fat-free is the same as 10% fat), to the availability heuristic (buying insurance after an earthquake), to anchoring, representativeness, hindsight bias (the "I knew it all along" effect), the illusion of validity (we cannot predict, but this doesn't prevent us from doing so), entrepreneurial delusions (even though only 35% of start-ups survive the first five years, almost 90% of entrepreneurs don't think this will happen to them - this is actually a good one, as excessive risk-taking among start-ups drives innovation), etc., etc. The examples are endless. 

Basically, the conflict between the two Systems in our brain results in our blindness to uncertainty (recall Taleb), and our blindness to the obvious. After having read this book it becomes much clearer why certain people tend to believe that, for example, vaccinations cause autism, or that the earth is flat (yes, this is an actual "debate", believe it or not). People really don't make rational decisions, even if they are "smart". Intelligence, actually, has nothing to do with it. When performing automatic tasks we are guided by our intuitions, not by our reason. But what we can improve is knowing when, in the decision-making process, it is necessary to override the biases generated by System 1 and operate on System 2 alone.

It's really an insightful book, and I fully recommend it. As you walk through the examples you barely notice its length. Maybe because you start to think that everything he writes about is important! Is that a heuristic as well? Did Kahneman exploit our System 1 during the reading process? Perhaps he did. Perhaps he did...

Thaler, Richard and Sunstein, Cass (2008) Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

The next step after Kahneman was surely this book, containing the very essence of behavioral economics, its application to policy and, as the title says, our "health, wealth and happiness". The authors, Richard Thaler, behavioral economist, and Cass Sunstein, legal scholar and expert in constitutional law (both at the University of Chicago at the time of writing; Sunstein is now at Harvard), also start by abandoning the general economic assumptions behind human rationality (distinguishing between the perfect Econs and the imperfect Humans) and use some of the very same examples as in Kahneman's classic (although Nudge actually came out three years earlier; the examples they all use are common knowledge in psychology and widely used when teaching this material). Instead of System 1 and System 2 they use the more formal Automatic System (i.e. "1") and Reflective System (i.e. "2"). They too draw attention to the same biases arising from the conflict between the two systems, like anchoring, the availability heuristic, the representativeness heuristic, status quo bias, herd behavior, etc. 

But their basic underlying idea is policy-oriented. In the book they define and introduce a new policy-making concept - libertarian paternalism. Some might call it an oxymoron, but the authors are adamant that it isn't. They reject the traditional coercive nature of paternalism, and embrace its choice architecture approach in trying to help people make better decisions. The libertarian part refers to having the freedom to choose whatever you wish (so it's based on our voluntary decisions; no one is forced to do anything). Combining the two implies nudging a person towards one choice, even though that person has the full freedom not to accept it and go for something else. What this concept exploits is human inertia - the unwillingness to be detached from the default. For example, the decision to become an organ donor after an accident. Making this the default option significantly increases the number of people who agree to become donors. Asking people to tick a box in order to become a donor is not the same as asking them to tick a box to remove themselves from the potential donor list. The same goes for retirement savings decisions - making the default option a given savings rate (say 6%), which can be changed or abandoned at will, is not the same as asking people to choose their own savings plan. In the first case someone (the employer) has made the choice for you, which you are free to alter, while in the second you have to choose for yourself from scratch, with the result that most people simply don't. For example, the opt-in scheme (where you have to tick a box to enroll) has savings participation rates around 60%, whereas the identical opt-out scheme (where you have to tick a box to withdraw) has participation rates between 90 and 95%. A huge and important difference. 
Automatic enrollment solves the inertia problem (an example exploiting inertia are magazine subscriptions - they get you to sign up for the discount version, and after it ends, the default option is usually to extend the subscription under the normal price - most people simply forget to cancel it). 

In essence the book set out to become the next hit with policy-makers, an effort it has succeeded at (read the next review). The authors present a multitude of examples of how to improve policy-making and help achieve better outcomes in public health (e.g. how we choose our food in the cafeteria), the environment, schools, marriage, finance, retirement plans, reducing energy use (putting a smiley face on bills goes a long way, believe it or not), and even improving men's aim at public urinals (give us a urinal target, seriously - it can have politicians' faces on it). 

Upon careful inspection, some policy proposals they advocate are better described as soft paternalism, since the libertarian perspective puts the freedom-to-choose argument at its core. Personally, I don't find the idea too threatening to free will, since it merely solves the laziness problem. Even if we know what's good for us, we can be too lazy to achieve it. Their examples and proposals are seemingly trivial, but can actually improve outcomes. Having a smiley or frowny face on your utility bills doesn't hurt anyone, nor does it impinge on free will. It just helps us put things in perspective. It lowers our informational asymmetry. And that is always a good thing. 

Thaler, Richard (2015) Misbehaving. The Making of Behavioural Economics. Allen Lane, UK

After having read the first two books, Thaler's latest volume on how the field of behavioral economics originated and catapulted itself into the mainstream paradigm was a must. Although many of the stories are the same as in the first two books (particularly in Nudge, obviously), the book is still a refreshing and easy read. However, I do suggest that any reader anxious to learn more about the field of behavioral economics take a few months between each of these - you'll be reminded of the major concepts once again, and yet some of the examples will sound new. 

Misbehaving is an autobiographical book about behavioral economics and one of its key founders, Richard Thaler. Thaler paints an encompassing picture of how this once narrow field, whose advocates were treated as renegades among mainstream economists at the time (particularly in finance), came not only to the fore of mainstream economics, but is also threatening to overtake it completely. He describes in detail all the wrong assumptions economists make (e.g. perfectly rational agents who are not overconfident, nor befuddled with biases, and can solve optimization problems easily), accompanied by a multitude of examples (some from Nudge, but mostly new and interesting). Mental accounting and self-control seem to be the two things economists stay blind to. For example, people being responsive to bargains and rip-offs (raising the price of shovels after a snowstorm may be good economics, but it's certainly bad for business, at least in the medium run), or the acknowledgment of sunk costs (which is economically unsound - paying money for an event we end up not attending does hurt us), or the fact that we find it hard to stop ourselves from eating a bowl of cashew nuts standing in front of us while waiting for dinner. And so on (there's even an example of a Prisoner's Dilemma game from the show Golden Balls, an example I always use in my class when introducing this concept - watch it, the title is "the weirdest split or steal ever").

Thaler is, above all, a great storyteller. Even more amusing than Kahneman, he writes with a distinguishable wit, self-criticism, and a sense for entertainment. From what I read in the three books, I gather he's like that as a person as well: humorous, observant and intelligent (and apparently lazy - as Kahneman reportedly said, "he would only work on questions that are intriguing enough to overcome the default tendency to avoid work"). So a typical Human, rather than a perfectly rational Econ, as he himself would put it. 

In the book he takes us on a journey through how behavioral economics - himself at the center of it all, along with the dynamic duo Kahneman-Tversky - came into existence, how it survived the harsh academic debates with mainstream economists, and how it eventually triumphed and started to be used as a policy-making tool around the world. The Nudge Unit in the UK (officially the Behavioural Insights Team), set up under Cameron's administration, and the White House Social and Behavioral Science Team, set up under the Obama administration, are the two most famous examples, but as Thaler informs us, over 50 countries in the world have set up similar teams dedicated to designing better policies aimed at improving our well-being. 

The most interesting thing for me, apart from the stories, experiments, and anecdotes, was all the famous economists he interacted with during the creation of the field - both adversaries and proponents, and enemies turned advocates. He mentions the creme de la creme of the economics profession of the 1970s, 80s and 90s: from the Chicago boys Eugene Fama, Merton Miller, Robert Lucas (all Nobel laureates, N.L. henceforth), and judge Richard Posner, to Robert Shiller (N.L.), Vernon Smith (N.L.), Charles Plott, Alvin Roth (N.L.), Kenneth Arrow (N.L.), Robert Barro, Thomas Schelling (N.L.), Andrei Shleifer, Michael Jensen, Orley Ashenfelter, etc. (to name only a few) (btw, you can guess which of these were his friends based on which of them he refers to by their first name at some point in the book). Having to confront and persuade (future or current) Nobel laureates in the field must have been an exciting experience.

Anyway, the success of the field, from its infancy during the long working hours at Stanford in the early 1970s, to its profound impact on policy in the current decade, is impressive to say the least. I don't recall any scientific field developing so quickly in terms of having an immediate impact on "the real world". And particularly a field that went against the dominant assumptions of its science at the time (the Samuelson mathematical economics revolution of the 50s, the origination of the efficient market hypothesis in the 60s, and the emergence of new classical macroeconomics in the 70s and 80s, to mention only a few obstacles). To withstand such powerful opposition means that behavioral econ really is that good. All it takes now is for it to penetrate the principles of economics textbooks (a bold endeavor), and to finally improve decision-making in macroeconomics (something Thaler predicts/hopes will be the next step). This will be its most difficult task. Unlike health or education economics, or savings and taxation, or finance for that matter, fiscal and monetary policy are difficult to test, and, as Thaler correctly points out, their theories/hypotheses are difficult to falsify (in many cases because they appear too vague, plus there's not enough data to begin with). If behavioral economics does to macro what it did to finance, it will be its ultimate triumph. 

Addendum 

These three books are just the beginning, and are nowhere near the exhaustive list of must reads on behavioral economics. In addition to a multitude of papers (Kahneman reprints two of his papers co-written with Tversky at the end of Thinking, fast and slow), there are other books I would like to recommend: 

  • Kahneman and Tversky (eds): Choices, Values and Frames
  • Kahneman, Slovic, and Tversky (eds): Judgement under Uncertainty: Heuristics and Biases
  • Sunstein, Cass: Why Nudge. The Politics of Libertarian Paternalism (his own perspective, I assume similar to Thaler's Misbehaving)
  • Halpern, David: Inside the Nudge Unit: How small changes can make a big difference (the Nudge Unit CEO examining their policy outcomes)
  • Ariely, Dan: Predictably Irrational
  • Ariely, Dan: The Upside of Irrationality
  • Samson, Alain (ed) The Behavioral Economics Guide 2014 (pdf)

Gladwell's and Taleb's books are also somewhere in that category. Many others exist as well. Have fun! 

Thursday, 10 March 2016

Explaining regression to the mean in football

I'm a football fan (European football of course). In addition to my family, my work, my interests in economics, politics, cooking, and a host of other stuff, I enjoy football. Watching, playing and reading about it. It's a complete put-my-mind-at-rest kind of thing. So yesterday, before the Chelsea-PSG game (I'm a fan of neither club, actually), I read this article on Eurosport entitled: "Chelsea’s strange success under interim managers – here’s why they CAN win the Champions League".

I'm not here to pick on them for their terrible prediction. Bad predictions are normal in sports sections. No, I was more infuriated with the tone of the article and the apparently incredible success of Chelsea's interim managers who, after the team performed badly at the beginning of the season, managed to pick the team up and used all their wit and expertise to turn things around and even win trophies (Di Matteo won the Champions League, Benitez the Europa League, Hiddink the FA Cup, and Grant came close to clinching both the Premiership and the Champions League). The explanations for these successes were simplistic to say the least. But I don't want to attack the journalist directly, nor sports journalism in general; it's not their fault. Most people have convoluted theories about other people's (and their own) success. And nowhere is that more noticeable than in sports. Every fan-based psychological bias you can think of can be found in the world of sports. Any sport, particularly the most popular ones. There is a huge gap between the actual experts in sports (the athletes themselves, the really good managers, a host of people working for the athletes/clubs) and the general public - the fans. But this doesn't prevent the fans from playing pundits. In fact, sport-viewing seems to encourage punditry. Just pay attention in your local pub when watching the next sports game - everyone in that pub seems to know better than the losing team's manager. Everyone. Including me :)

Regression to the mean 

Anyway, there is a lot I can write about each and every fan-based bias I can think of, but I'll only focus on the two things seriously wrong with the arguments laid out in that article (and in sports conversations in general). The first is the lack of elementary statistical knowledge in recognizing one of the most central statistical findings - regression to the mean. 

What is regression to the mean? In simple terms, it is when a series of over-performance is followed by a decline in performance, or vice versa, when a series of under-performance is followed by an upsurge in performance. So if you notice an extreme in a certain observed outcome at one point in time, expect the next set of measurements to move towards the average. Of course this mostly applies to situations that involve a lot of luck (e.g. rolling a die - over time the average will regress towards 3.5, without exception). Yes, skill matters in every occupation, but in some more than others luck is often key. Sports is the best example. The stock market is another. Of course, the professionals in each of these two fields will tell you how luck has little to do with their success, but they tend to be deluded and overconfident about their own performance most of the time anyway. 
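The die example is easy to check yourself. Here's a minimal sketch (the seed, sample sizes and helper name are mine, just for illustration): short series of rolls can land far from the expected value, while long series inevitably settle around 3.5.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def running_average(n_rolls):
    """Average of n_rolls throws of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Short series can be extreme; long series regress towards the
# expected value of a fair die, (1+2+3+4+5+6)/6 = 3.5.
for n in (5, 50, 5000):
    print(n, round(running_average(n), 2))
```

Run it a few times with different seeds: the 5-roll average jumps around wildly, the 5000-roll average barely moves off 3.5.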

I actually have an introductory econ lecture where I use football examples to teach the students some basic statistics. Here are the two slides I show them for this topic (both taken from a great book "The Numbers Game" by Chris Anderson and David Sally):

Data from ter Weel, B. (2006), "Does Manager Turnover Improve Firm Performance?", adapted from Anderson & Sally (2014): "The Numbers Game"
What you see here is data on managerial sackings and club performance before and after, plotted from research by Bas ter Weel on the Dutch first division, the Eredivisie, over an 18-year sample (read the news story about the research here). In the graphs, time t is the time of sacking; t - n (n=1...5) is performance before the sack; t + n is performance after the sack. On the y-axis, relative performance measures how the club was doing relative to the games before the first data point (t - 5). So when a team drops from 100% to 50% in relative performance, this means that if they got, say, 20 out of 30 points earlier in the season (a 66.6% success rate), they got only 5 of the possible 15 points (a 33.3% success rate) in the next 5 games (just before the sacking). Their performance deteriorated rapidly. 
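The arithmetic behind the relative performance measure can be made explicit with a few lines (the helper function and the 3-points-per-game convention are my own framing of the numbers above):

```python
def success_rate(points, games):
    """Share of the available points won (3 points per game)."""
    return points / (3 * games)

before = success_rate(20, 10)  # 20 of a possible 30 points earlier in the season
after = success_rate(5, 5)     # only 5 of a possible 15 points in the next 5 games

# Relative performance: the later success rate as a share of the earlier one.
# 0.333 / 0.667 = 0.5, i.e. the drop from 100% to 50% described above.
print(round(before, 3), round(after, 3), round(after / before, 2))
```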

When I show the students the first slide, they immediately jump to conclusions about how this is proof that owners should fire managers - performance is much better after the sacking. But that conclusion is wrong. 

The best way to prove it is to look at similar teams that had a poor run of form but decided to stick with their manager instead of firing him (think of it as a control vs. treatment group). This is shown in the second graph, where the red line indicates the control group - similar teams that kept their managers. Their performance picks up after the drop in form, and it tends to be even better over the next 5 games with the old manager in charge. 

So yes, the new manager may have brought something to the team - perhaps he restored some confidence - but in the end the team simply regressed to the mean. Going back to Chelsea: every one of their teams taken over by interim managers was a very good team. Every one of them had stellar players like Drogba, Lampard and Terry (the backbone of Chelsea for eight seasons), in addition to a number of other stars. These players don't just forget how to play football. They experience a stretch of poor performances, after which, given their quality, they can be expected to bounce back and start winning again. As they are currently doing in the second half of this season (well, apart from last night). Remember, even Barcelona have a spell of poor performances every now and then when you look at all of their seasons put together (last season they started poorly, before eventually winning the treble). 

To show you how over-performance and regression to the mean can last for whole seasons, think of Liverpool during Brendan Rodgers' three seasons in charge. First season: poor; 7th place (even with Luis Suarez scoring 30 goals). Second season: brilliant; 2nd place and a run-in for the Premiership until the bitter end (again Suarez scored 31 goals, but with great performances from the entire team, including the talisman captain Steven Gerrard - oh God, I'm starting to sound like a sports writer, aren't I?). And the third season again poor: 6th place (albeit without Suarez in the team). So Liverpool were extraordinary under Rodgers for only one season (and even then it was actually the second half of it when they really excelled). Liverpool under Rodgers simply regressed to the mean - which was OK, but not particularly good either. The new manager Klopp is faced with having to lead the same team through its regression to the mean for yet another season. Expect somewhere between 5th and 7th place (this is unfortunate for me to accept as a Liverpool fan - but as a fan I may at least hope and dream).  

Hot streaks 

Regression to the mean also tells us why there is no such thing as a hot streak in sports. Many will disagree and cite momentum and team spirit in light of great performances. All of that is true, no doubt about it, but in the end, when you look over a thousand games, of course you're going to observe a team winning many games in a row at some point. Imagine tossing a coin a thousand times. You know the final distribution will be around 50-50. Perhaps 502 heads and 498 tails, but it will be close to 50-50. Is it really impossible to imagine that at some point you toss eight or ten heads in a row? In fact, in a thousand tosses you should expect it. Try it. If you have time, toss a coin one thousand times, and do it several times across a few days. Report the odd results and be amazed. 
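If a thousand coin tosses sounds tedious, a simulation does the job. A minimal sketch (the seed and the run-length helper are mine): the overall split comes out near 50-50, yet the sequence still contains surprisingly long runs of identical outcomes - the "hot streaks".

```python
import random

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(42)  # fixed seed so the sketch is reproducible
flips = [random.choice("HT") for _ in range(1000)]
heads = flips.count("H")

# Near 50-50 overall, but with long streaks hiding inside the sequence.
print(heads, "heads;", "longest streak:", longest_run(flips))
```

Typically the longest streak in a thousand fair tosses comes out around eight to ten - pure chance, no momentum required.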

Of course, good teams amass more wins than losses over those thousand games (just ask Alex Ferguson), but even they ride on luck a big portion of the time. This is not to say skills aren't important, along with all the other stuff, but luck carries great weight in sports. After all, that's what makes sports unpredictable, and that's precisely why we like it! 

Historicism and selection bias

The second thing wrong with the article (and sports conversations in general) relates to its conclusions and implications: if all previous interim managers were that good, Hiddink will (again) surely follow in their footsteps and bring the club success this year. This is historicism at its worst. Using a few historical anecdotes to draw powerful predictions about the future is naive at best, dangerous at worst. In an occupation like football (sports in general), where successes and failures have A LOT to do with luck (yes, skills are important, money is important, training and game strategies are important, but 50% of it is just pure luck - read all about it here), history is almost irrelevant for inferring future performance. For example, what were the expectations about Leicester City's current season based on their last season? Note: Leicester were fighting for survival last season, while they are top of the table this season with a good chance of winning the Premier League! Many pundits expected them to be relegated this season, based purely on their past performance. At the same time, many of the same pundits expected Chelsea to dominate the Premier League for another season after they were champions last year. But they didn't. Past performance is meaningless in football. 

The same goes for 80-year long curses and traditions that apparently are bestowed upon some clubs or players. Or why one club simply cannot win an away game at a certain stadium - it must be cursed! Until of course the club wins there. And surprise, surprise, the curse is broken. 

The only thing sillier in the world of football that I can think of is the so-called Ramsey curse - every time Aaron Ramsey, an Arsenal midfielder, scores a goal for his club, a famous person dies. Not only is this ridiculous on so many levels, it is also a textbook example of selection bias. Famous people die all the time, regardless of whether Ramsey scores goals or not. In other words, how many times did a famous person die when Ramsey DIDN'T score a goal? 
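The base-rate logic behind that question can be simulated. In the sketch below, goals and famous deaths are generated completely independently of each other; every rate is a made-up illustration, not real data. Even so, a handful of "goal + death" coincidences appear by pure chance - while vastly more deaths fall on days with no goal, the cases the "curse" conveniently ignores.

```python
import random

random.seed(7)  # reproducible; all rates below are invented for illustration

days = 365 * 5         # five years of days
p_goal = 10 / 365      # assume Ramsey scores ~10 goals a year
p_death = 50 / 365     # assume ~50 "famous" deaths a year

goal_and_death = goal_no_death = 0
for _ in range(days):
    goal = random.random() < p_goal    # independent draws:
    death = random.random() < p_death  # goals have no effect on deaths
    if goal and death:
        goal_and_death += 1
    elif death:
        goal_no_death += 1

print("deaths on goal days:", goal_and_death)
print("deaths on non-goal days:", goal_no_death)
```

The coincidences are what a believer in the curse notices; the far larger second count is what selection bias hides.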

Ramsey's infamous record...Who's next?
To paint an even clearer picture of the Ramsey curse, here's an anecdote from Taleb's Black Swan about Cicero (it was actually Cicero telling the story about someone else, but I'll use Cicero as the protagonist). Cicero was shown a painting of soldiers surviving a shipwreck and praying to the gods (presumably Neptune, the god of the sea) to save them. They all survived, and this was supposed to be compelling evidence that prayer can save your life. Cicero's response: "where are the paintings of those who prayed and then drowned?"

And I'll leave you with that.