Thursday, 27 October 2016

Predicting the 2016 US Presidential election

Is it possible to have a more accurate prediction by asking people how confident they are that their preferred choice will win?

One consequence of this hectic election season has been that people have stopped trusting the polls as much as they used to. This is surprising given that in the US, unlike in much of Europe, pollsters and particularly polling aggregation sites (like FiveThirtyEight) have on aggregate been quite accurate in their predictions thus far. Still, one cannot escape the overall feeling that pollsters are losing their reputation, as they are often accused of complacency, sampling errors, and even deliberate manipulation.

There are legitimate reasons for this, however. With the rise of online polls, proper sampling has become extremely difficult. Online polls are based on self-selection of respondents, making them non-random and hence biased towards particular voter groups (the young, the better educated, the urban population, etc.), despite the efforts of those behind these polls to adjust for various socio-demographic biases. On the other hand, the potential sample for traditional telephone (live interview) polls is in sharp decline, making them less and less reliable. Telephone interviews are usually done during the day, biasing the results towards stay-at-home moms, retirees, and the unemployed, while most people, for some reason, do not respond to mobile phone surveys as eagerly as they once did to landline surveys. With all this uncertainty it is hard to gauge which poll(ster) we should trust and how to judge the quality of different prediction methods.

However, what if the answer to ‘what is the best prediction method’ lies in asking people not only who they will vote for, but also who they think will win (as ‘citizen forecasters’) and more importantly, how they feel about who other people think will win? Sounds convoluted? It is actually quite simple.

There are a number of scientific methods out there that aim to uncover how people form opinions and make choices. Elections are just one of the many choices people make. When deciding who to vote for, people usually follow their standard ideological or otherwise embedded preferences. However, they also carry an internal signal which tells them how much of a chance their preferred choice has. In other words, they think about how other people will vote. This is why people tend to vote strategically and do not always pick their first choice, but opt for their second or third, if only to prevent their least preferred option from winning.

When pollsters conduct surveys they are only interested in figuring out the present state of people’s ideological preferences. They have no idea why someone made the choice they did. And if the polling results are close, the standard saying is: “the undecided will decide the election”. What if we could figure out how the undecided will vote, even if we do not know their ideological preferences?

One such method, focused on uncovering how people think about elections, is the Bayesian Adjusted Social Network (BASON) Survey. The BASON method is first and foremost an Internet poll. It uses the social networks between friends on Facebook and followers and followees on Twitter to conduct a survey among them. The survey asks participants to express: 1) their vote preference (e.g. Trump or Clinton); 2) how much they think their preferred candidate will get (in percentages); and 3) what they think other people will estimate that Trump or Clinton will get.

BASON Survey for the 2016 US Presidential elections
(temporary results for states in which predictions have been made by our users)
Let’s clarify the logic behind this. Each individual holds some prior knowledge as to what he or she thinks the final outcome will be. This knowledge can be based on current polls, or drawn from the information held by friends and by people they find more informed about politics. Based on this it is possible to draw upon the wisdom of crowds, where one searches for informed individuals, thus bypassing the need to compile a representative sample.

However, what if the crowd is systematically biased? For example, many in the UK believed that the 2015 general election would yield a hung parliament, because that is what the polls suggested. In other words, information from the polls can create a distorted perception of reality which is fed back to the crowd, biasing its internal perception. To overcome this, we need to see how much individuals within the crowd diverge from the opinion polls, but also from their internal networks of friends.

Depending on how well they estimate the chances of their preferred choices (compared to what the polls are saying), the BASON method estimates each participant’s predictive power and gives a higher weight to the better predictors. For example, if the polls are predicting a 52%-48% outcome in a given state, a person estimating that one candidate will get, say, 90% is given an insignificant weight. Group predictions can of course be completely wrong, as closed groups tend to suffer from confirmation bias. In the aggregate, however, there is a way to get the most out of people’s individual opinions, no matter how internally biased they are. The Internet makes all of them easily accessible for these kinds of experiments, even if the sampling is non-random.
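The exact BASON weighting formula is not spelled out here, but the logic can be sketched: score each respondent by how far their estimate diverges from the poll baseline, then take a weighted average of all estimates. Below is a minimal toy version in Python; the exponential weighting rule and the `scale` parameter are my own illustrative assumptions, not Oraclum's actual method:

```python
import math

def predictor_weight(estimate, poll_baseline, scale=10.0):
    # Toy rule: the further a respondent's estimate is from the poll
    # baseline, the (exponentially) smaller their weight. The real
    # BASON weighting is not public; this is only illustrative.
    return math.exp(-abs(estimate - poll_baseline) / scale)

poll_baseline = 52.0                  # polls say 52%-48% in this state
estimates = [51.0, 53.5, 90.0, 48.0]  # respondents' estimates (%)

weights = [predictor_weight(e, poll_baseline) for e in estimates]
forecast = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
```

With these numbers the respondent claiming 90% gets a weight of about 0.02, versus roughly 0.9 for those near the polls, so the aggregate forecast stays close to the plausible estimates.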

Oraclum is currently conducting the survey across the United States. Forecasts are updated daily with the final one being shown on Election Day. 

So if you think you know politics, and that you do not live in a bubble where everyone around you thinks the same way, log into our app through Facebook or Twitter, give your prediction, and attain bragging rights among your friends on November 8th. Don’t forget to share and remember: if it’s not on Facebook or Twitter, it didn’t happen!

Thursday, 20 October 2016

The trade-off between equality and efficiency reexamined

After having read and reviewed Stiglitz's book earlier this week, and after having written the following paragraph...
"I too have long considered the relationship between equality and efficiency to be non-linear, instead of just a simple trade-off. Too much equality isn’t good since it reduces incentives, but neither is too much inequality. I would say the relationship is of an inverted-U type where moving to both extremes – too much and too little equality is bad for the economy. The trick is to find an optimal point which reduces the level of inequality where it offers more opportunities for everyone, but also just enough for it to continue to drive incentives. More on that in my next blog post."
...I just had to dig deeper into the whole equality-efficiency trade-off. So I picked up a seminal book from a man who specialized in economic trade-offs, none other than - Arthur Okun! Okun is more famous for his "law" stipulating the linear relationship (read: trade-off) between GDP and unemployment, where every 1% increase in the rate of unemployment corresponds to a 2% decline of GDP. But today I will not be examining this supposed relationship from the 60s, but a more contemporary one (proposed in the 1970s), claiming that there is a similar linear relationship between equality and efficiency, all summarized in the following book:
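The unemployment rule of thumb mentioned above is easy to state as a formula: the GDP shortfall is roughly minus two times the change in the unemployment rate. A one-function sketch (the coefficient of 2 is the classic textbook value; empirical estimates of it vary):

```python
def okun_gdp_gap(delta_unemployment_pp, coefficient=2.0):
    # Okun's rule of thumb: each 1-point rise in the unemployment
    # rate corresponds to roughly a 2% shortfall in GDP.
    # The coefficient of ~2 is the textbook value; estimates vary.
    return -coefficient * delta_unemployment_pp

okun_gdp_gap(1.0)   # → -2.0: unemployment up 1 point, ~2% less GDP
okun_gdp_gap(-0.5)  # → 1.0: unemployment down half a point, ~1% more GDP
```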

Okun, Arthur (1975) Equality and Efficiency. The Big Tradeoff. Brookings Institution, Washington, DC. (this would now be vol. 12 of the What I've been reading section)

Okun's book, published in 1975, testifies to this relationship, whereby greater economic equality necessarily implies, to some extent, lower efficiency of the economy. In other words, lowering inequality comes at the cost of lowering efficiency. He develops a very interesting argument in which he acknowledges this trade-off, but also proposes a set of policy interventions that would increase both efficiency and equality – such as policies aimed at attacking inequality of opportunity, like racial and sexual discrimination in the workplace (which were arguably even greater back in the 1970s than today) and barriers of access to capital. So even though he implies a linear relationship between equality and efficiency, where one is necessarily sacrificed in terms of the other, he clearly sees that when inequality is too high, it can also act as an impediment to efficiency. Okun emphasizes on several occasions that he is a staunch believer in the market system, but also that some rights (like the right to vote) should not be bought and sold for money. In other words, he believes in the enormous efficiency of the market system (he devotes an entire chapter to the benefits of the “mixed” economy model versus the socialist economic model), but is also concerned with the moral question of why some of our basic human rights cannot have a price tag attached to them. The reason why is very eloquently summarized in the following sentences: “Everyone but an economist knows without asking why money shouldn’t buy some things. But an economist has to ask that question”. Hence the first chapter.

It is in this book that he also uses his famous “leaky bucket” metaphor to emphasize the inequality-efficiency trade-off. Here’s a brief explanation: say you want to tax the richest families a certain amount of money (e.g. $4000 per family) and then redistribute this money to the poor so that each poor family gets $1000 (the ratio of poor to rich families is assumed to be 4:1). Now imagine you are carrying all the money you took from the rich in a leaky bucket, so that each poor family will necessarily receive less than $1000. What is the cutoff value of money the poor would receive for you to still consider the transfer worthwhile? There is basically no wrong answer here – it depends on your preferences for redistribution. Some people would accept 10 or 20%, some 60% (like the author), some almost 99%. The point the leaky bucket experiment is trying to make is that every redistributive action will necessarily come at some cost in efficiency. But we as a society must accept this in order to lower economic deprivation that not only hurts the economy, but can also infringe on our principles of democracy.
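The arithmetic of the thought experiment can be made explicit. Here is a small sketch; the 40% leakage rate used as the default is an arbitrary example for illustration, not Okun's number:

```python
def leaky_bucket(tax_per_rich=4000.0, poor_per_rich=4, leakage=0.4):
    # $4000 taken from each rich family, split among 4 poor families,
    # would give each $1000 if the bucket did not leak; the leak
    # (administrative costs, blunted incentives) eats part of it.
    per_poor_no_leak = tax_per_rich / poor_per_rich
    per_poor_received = per_poor_no_leak * (1.0 - leakage)
    return per_poor_no_leak, per_poor_received

leaky_bucket(leakage=0.4)  # → (1000.0, 600.0): each poor family gets $600
```

Your personal cutoff for `leakage` is exactly the preference for redistribution the metaphor asks you to reveal.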

Okun devotes considerable attention to the problem of too much power in the hands of certain interest groups, and how they might use it to bias the budget (and much more) in their direction. He cites oil producers, farmers, teachers, union workers, gun lobbies, you name it. Specifying the intensity of their preferences through money is a perfectly legitimate manifestation of their democratic right to fight for their interests. However, by doing so they necessarily channel public resources into the hands of the few, at the expense of an unorganized majority which lacks sufficient interest to engage (just as Mancur Olson taught us).

What fascinates me is that this discussion seems so contemporary, yet Okun wrote it back in 1975! Furthermore, he complains about the “unacceptably” high levels of wealth and income inequality of his day: “The richest 1 percent of American families have about one-third of the wealth, while they receive about 6 percent of after-tax income.” Today the income figure is much higher – about 18% of total income. In the books on inequality I’ve read so far, the 1970s were the golden age! But according to Okun, inequality was still too high even then. Even in a decade when the top income tax rate was 70%, America still had an inequality problem.

This only confirms Okun’s hypothesis that the US has always sacrificed equality for efficiency. Inequality in the US has been, and probably always will be, higher than in Europe – but that is precisely because of the innovation-driven, trial-and-error, cut-throat capitalism of the US versus the welfare-state, cuddly capitalism of Europe. And that's fine. But the fact is that inequality in neither of them has to be this high. Hence the final chapter, where he proposes a set of standard policy measures (some of them quite good, focusing on equality of opportunity) designed to combat the “alarmingly” high inequality of the 1970s (sic!) without sacrificing efficiency.

Building up on Okun: The trade-off reexamined

Following in that direction, I consider the relationship to be an inverted U-shaped curve: beyond some point, higher levels of inequality correspond to lower levels of efficiency (and hence GDP/income per capita growth), while at the other extreme too much equality implies a lack of incentives for people to create wealth. In other words, there will (and should) always be some acceptable level of inequality, which in itself is not necessarily bad provided it is combined with high social mobility. However, if the level of inequality is too high it will negatively impact economic growth. The goal is then to find a balance of lower inequality combined with high social mobility, in order to maximize economic efficiency, i.e. to maximize the productive capabilities of the economy. In other words, there is no simple linear trade-off between equality and efficiency - there is a need to strike a balance between them. I summarize it in the graph below:

We start from the bottom-left corner with the Gini index at its theoretical 0 level, implying perfect equality (each person having the same income). Clearly at that level of equality efficiency (measured as either total factor productivity (TFP) or GDP p/c growth) is also around 0, since no one has any incentive to produce or innovate given that all rewards are equal. Even at slightly lower levels of equality (after introducing some inequality), efficiency does not increase much, assuming that it takes time for agents to pick up the signal that there is now a possibility to work more in order to get more. Then, as inequality starts to increase, economic efficiency increases even more and the relationship becomes reinforcing - more people see that their innovation, talent or extra effort will be significantly rewarded, so they expand their activities, which creates upward pressure on both inequality and efficiency. This continues until the economy reaches a point of maximum efficiency for a given level of inequality. As I've pointed out in the graph, this is not necessarily at Gini=0.3; it could be either higher or lower than that - this needs to be verified empirically. After that global maximum of the curve, the relationship turns negative - more inequality beyond the efficiency-maximizing level slowly but steadily decreases economic efficiency, until society descends into something close to perfect inequality (Gini=1 means one person has all the income), where again there are no incentives to produce, innovate or create new value, given that all of this new wealth will just fall into the hands of the select few (as in a stationary bandit dictatorship).
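The shape described above can be captured by a simple beta-style curve that is zero at both Gini extremes and peaks in between. In the sketch below the exponents, and thus the peak at Gini = a/(a+b) = 0.3, are purely illustrative assumptions - as noted, where the peak actually lies is an empirical question:

```python
def efficiency(gini, a=3.0, b=7.0):
    # Illustrative inverted-U: efficiency is zero at perfect equality
    # (gini=0) and perfect inequality (gini=1), peaking at
    # gini = a/(a+b) = 0.3. The exponents a and b are assumptions,
    # not empirical estimates.
    assert 0.0 <= gini <= 1.0
    return (gini ** a) * ((1.0 - gini) ** b)
```

With these exponents, `efficiency(0.3)` exceeds the value at both lower (0.1) and higher (0.6) Gini levels, matching the story of a US that sits to the right of the peak.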

The question to ask is why this relationship between efficiency and inequality suddenly turns from positive to negative. What are the forces at work that turn inequality not only into a social but also an economic problem for society, in the sense that greater wealth accumulation in the hands of fewer and fewer individuals undermines productivity and the desire to innovate? The answer is exactly that - as more and more people start realizing that the value they produce is, within a crony system, ending up in the hands of the few rather than being distributed among the many, their productivity will necessarily decline. It is exactly like living in a communist dictatorship. Most people rationally choose not to innovate because they realize that any wealth they create will be extracted by the state. So a communist dictatorship will always, ironically, resemble a society with high levels of inequality, given that the elite around the dictator will hold not only full political power, but also a vast majority of economic power (if you want examples, just take a look at this list to see which kinds of countries score highest in their Gini levels).

Now, I've deliberately put the US on the right side of the curve, suggesting that it is currently beyond the peak of the efficient level of inequality, and that it certainly has room to lower its current high inequality without hurting its economic efficiency. On the contrary - doing so would most likely improve it. Remember that total factor productivity in the US has been in relative stagnation since the 1970s, which I think can be explained by the simultaneous onset of the Third Industrial Revolution and a pattern of technological progress that has lowered productivity and kept low- and middle-class wages relatively stagnant. Combined with globalization and a host of other factors (read about all of them here), these forces have driven the rise of inequality alongside a decrease in efficiency. An experienced researcher is likely to conclude that perhaps there is an omitted variable bias in this story, meaning that there is one common factor affecting both the rise in inequality and the decline of efficiency - technological change is the perfect candidate. I agree; the relationship is far from proven to be causal. Nevertheless, some levels of income inequality are obviously bad for growth. If the majority of the population is experiencing declining living standards, this affects their purchasing power and their consumer choices, which in turn puts a lot of businesses in danger of declining sales. A consumerist society is only efficient if people can get a decent salary for a decent job. The virtuous cycle is an amazing thing, but it needs to be in motion. If it stops or slows down (and we can actually measure this with an indicator called the velocity of money, which is at historical lows right now!) then the economy is likely to undergo a period of prolonged stagnation.

Finally, given that my graph above is a mere theoretical construct, one should really consult the actual data to see whether or not it holds. I intend to do just that in the next few years.

Monday, 17 October 2016

What I've been reading (vol. 11): Atkinson & Stiglitz

Atkinson, Anthony (2015) Inequality. What Can Be Done? Harvard University Press
Atkinson, Anthony (2008) The Changing Distribution of Earnings in OECD Countries. Oxford University Press

The first two books, both written by the same author, Oxford economics professor Sir Tony Atkinson, will be reviewed jointly. The reason is that the earlier book, The Changing Distribution of Earnings in OECD Countries, is more a case-study summary of the empirical facts behind the rise of inequality in the West over the past century, the point of which is again summarized in the first few chapters of the author’s latest book, Inequality. Basically, the earlier book is a very detailed portrayal of the worrying inequality trend in 20 OECD economies. It has two main parts – the first depicts both the theoretical arguments and the summary of the historical trends for all the given countries, and the second (over 200 pages) details all the data, graphs and individual explanations of the causes of inequality for each of the 20 observed countries. The more recent book is a popular version of the same argument intended for “the masses”, but in particular aimed at policymakers. After setting the diagnosis, describing the historical trends and the economics behind inequality, the author dives into the very ambitious task of setting out a series of 15 concrete policy proposals that countries (but mostly the UK) can apply in order to reduce income and wealth inequality. The final part of the book then discusses the potential objections – are the proposals shrinking the pie, can they be done, will globalization hinder their effect, and will they be sustainable within budgetary constraints?

Let’s start with the general conclusions of the first book in order to ease into the policy proposals of the second. The first book is not intended for the average layman; it’s more an empirical economist’s companion to the vast data on inequality. The true gems of the book for the average researcher are precisely those 200 pages of data and graphs on inequality for each of the 20 countries. It’s a wonderful dataset above all (accessible here), and Atkinson goes to great lengths to describe all the faults and benefits of the dataset, emphasizing on several occasions how the data is not comparable across countries (and to some extent is even difficult to compare within countries, given the different methodologies for income reporting).

But before descending into the data he summarizes the trends. And they are in most cases bad. It is the story most people worried about inequality are by now quite familiar with. Inequality followed a trajectory of decline after WWII which continued during the 50s, 60s and 70s, but since the 80s it began to rise, reaching its current unprecedented levels – well, at least that’s how the story goes for the US, the UK, Portugal, and the three transition economies in the sample – Hungary, the Czech Republic, and Poland – whereas the rest of Western Europe did not see such a high divergence between top and bottom income levels (the main comparison is between the income levels of the top 10% and the bottom 10% of earners). After presenting these trends Atkinson delivers his own critique of the economics textbook model of inequality, in which technology and varying skills (attained through education) are not enough of an explanation. He adds to them an important role for the capital market (think of interest rates on student loans), but also explains in a bit more detail the so-called superstar model (people with extremely scarce skills for which there is huge demand) and pay norms.

In his more recent book, Inequality: What Can Be Done?, he goes a bit deeper into examining some of the causes of the rise in inequality. He does it by first examining the factors that lowered inequality in the post-WWII period, only to see most of them reversed in the 1980s, making inequality rise again. One of the biggest reasons for the post-war decline in inequality was the rise in the share of wages in total national income (reaching 80% of national income in the 1970s in the US and UK). In the subsequent decades the share of capital in income increased whereas the share of wages decreased. Furthermore, the welfare state expanded significantly in the post-war era, as did redistributive social transfers. Unemployment was much lower, wage dispersion was reduced as a consequence of collective bargaining and government intervention in the labour market, while the concentration of capital income among top earners was in decline. Let’s also not forget the much higher income taxes on top incomes in that period. All of this basically went into reverse from the 1980s onwards (on average higher unemployment, a fall in the share of wages and a rise in the share of capital in total income, the declining power of unions, a cut in top income taxes and a scaling back of the welfare state). To this I would still add the forces of globalization (unskilled workers are mostly losers from trade) and technological change, as well as the effect of the capital market, changing pay norms and superstars (increasing demand for global talent).

Although Atkinson provides a very good overview of the facts here, he gets carried away from time to time. For example, I didn’t buy the argument that high income taxes in the period from 1950 to 1979 (when they averaged 75%!) were necessarily the reason why economic growth was high in the 50s and 60s. The main reason growth was high in those decades was probably post-war reconstruction (the broken window hypothesis), plus the post-war baby boom, which certainly encouraged greater consumption and hence significantly increased economic growth. When these two forces halted, relative stagnation ensued in the 1970s. High taxes had absolutely nothing to do with growth at the time; if anything they could have contributed to the 1970s stagnation, since their relative decline in the 1980s did bring back growth. However, none of this has, to my knowledge, been empirically verified by anyone.

Let’s move to the proposals. They are exactly what one would imagine in a book like this – popular, to some extent utopian, very bold, to some extent controversial (the author even acknowledges this several times, but stresses that such bold proposals are necessary to move the public discourse in the “right” direction). What disappoints me is the lack of convincing evidence of the full effect some of these proposals would entail. There seems to be a focus on only one aspect of the story – reversing the factors that caused inequality to rise – without considering how this would affect other aspects of modern societies. Plus, some of the ideas seem to me more like back-of-the-envelope calculations than seriously thought-through proposals. Don’t get me wrong, I am in favor of reducing inequality, but a set of “concrete” proposals has to be more precise about how exactly it would affect inequality, and how exactly it would affect economic growth. There are three chapters at the back that are supposed to answer part of this (e.g. there is a simplified analysis of how some proposals would fit into the budget), but it’s not convincing. Other proposals don’t even qualify, given that they are too vague. I understand this is a popular version and that the author clearly decided to avoid too many numbers and complexities so as not to bore the average reader. But for me to even consider many of the proposals, I require much more detail, first and foremost an answer to how exactly the author thinks many of them could be done (even assuming they pass through the bi-partisan parliamentary process). Again, I fully understand his reluctance to go into greater detail, and I understand he wanted to stir the dialogue rather than come up with a White Book of ready-made reforms.

Having said all that, some of the proposals are actually quite good and even easy to implement. Like the capital endowment fund for children (a minimum inheritance), the national savings bond to encourage savings, the child benefit extended to all children (and taxed as income for parents with higher incomes), the participation income proposal (similar to basic income, but different in terms of who has the right to get it, and how much), the earned income discount proposal, an interesting proposal for a progressive lifetime capital receipts tax, etc. However, other proposals go directly against the empirical evidence the author himself cites. He does this on purpose (to create an effect of shock). E.g. he cites the paper by Brewer, Saez, and Shephard, where they use a natural experiment setting to calculate the optimal top income tax rate (the one that would maximize revenue) at 40%. Atkinson nevertheless purposely calls for a top rate of 65%. Then there is the guaranteed public employment proposal (without any calculation of its effects), the code of practice for determining the living wage (to be determined by something closely resembling a central planning body), the proposal for a property tax which disregards its potential effect on housing prices, etc. Overall, the proposals are certainly very interesting; however, I would welcome a more detailed and more empirically justified argumentation for each of them.

Stiglitz, Joseph (2012) The Price of Inequality. WW Norton 

In what is essentially perceived to be a book about inequality, Nobel-prize winner Joseph Stiglitz presents a very interesting portrayal of everything that went wrong in the US in the past few decades, in economic but also in political terms. Despite stable GDP growth, even in per capita terms, most citizens have not felt this growth and have actually witnessed their living standards decline. The US political system is failing as well; it is being captured by special interests which channel money towards the wealthy (via direct involvement in government redistribution, but mostly via regulation and legislation tilted in their favor), making America less and less a country of equal opportunity. In fact, the decline of social mobility, emphasized several times in the book, is an even bigger problem for the US. Both the poor and the rich are becoming entrenched in their positions in the income distribution, making it increasingly unlikely for someone who is poor to climb the income ladder all the way to the top, and vice versa: it is very unlikely for someone who is rich to fall out of this category (educational opportunities play a key role here). All of this is without doubt true and very problematic, as the US model of democracy is being shaken to its core. How can the US assert its moral authority over others if it too has fallen into the trap of cronyism? Stiglitz, however, does not explicitly define all of the above as the consequence of cronyism or even interest-group state capture; he prefers to present it as a market failure, but also a political failure to deal with the market failure.

For Stiglitz there is something broken in the system; a system that seems to be designed to help those at the top at the expense of the rest of society. His main culprit here is politics, even if some of these forces are attributed to markets. The economic elites have used their money to buy political influence, and hence political power, in order to shape the system to their benefit. This may sound a bit like a conspiracy theory, but it has some merit. There are a multitude of examples of political capture by interest groups (via campaign contributions), but even more importantly regulatory capture (when an industry directly funds the politicians responsible for overseeing its practices), or even what Stiglitz calls ‘cognitive capture’ – only those who “agree” with the bankers are allowed to write the legislation and regulation that concerns the financial industry (this includes central bankers as well).

An even bigger problem is how the wealthy (the top 1%) have acquired their wealth. Stiglitz attributes much of it to rent seeking: an activity where one gets “income not as a reward for creating wealth but by grabbing a larger share of the wealth that would otherwise have been produced without their effort.” And although this cannot be said of most of those in the 1% (I wrote about it a long time ago), it certainly does seem to be true for a considerable number of those at the top. At the conference where I met Stiglitz, this was in fact the main discussion – how many of those at the top around the world acquire their wealth through political connections (rent seeking) rather than through actual wealth creation. In other words, the problem is that even though the pie is growing, a bigger and bigger share of it is being captured by the rent seekers instead of the wealth generators. And that is indeed a huge problem.

Another interesting part of the argument is the price we’re paying for inequality. Stiglitz goes beyond the trade-off between equality and efficiency stating that too much inequality is also bad for efficiency and bad for growth. He’s right, and it’s easy to see how – for one thing, having too many people left out of the benefits of economic growth derails their consumer spending power. I too have long considered the relationship between equality and efficiency to be non-linear, instead of just a simple trade-off. Too much equality isn’t good since it reduces incentives, but neither is too much inequality. I would say the relationship is of an inverted-U type where moving to both extremes – too much and too little equality is bad for the economy. The trick is to find an optimal point which reduces the level of inequality where it offers more opportunities for everyone, but also just enough for it to continue to drive incentives. More on that in my next blog post. 

Having said all this, there is still a feeling that the book lacks clarity in certain parts, as even within chapters it tends to jump from one argument to another. It gives the impression that it was either written too fast, assembled quickly from some older writings, or even that certain chapters were written by someone else (several times he uses the “I” form, but in a few chapters there is also a “we” form, as in “our argument is…”). 

Plus there is often an incoherence in the arguments. In the early chapters he claims that markets are not the principal driving force behind the current state of the US, since all other countries operate on the same market principles. His hypothesis is that market forces are real but are being shaped by the political process that defines laws, regulations and institutions, all of which have distributive consequences. Then in the closing few chapters he resorts to his usual bashing of market forces and a cry for an omnipotent government to solve their failures. The biggest problem with this standard argument from the Left is that it expects the very same corrupt politicians, enslaved by their clientelistic relationship with powerful interest groups, to be given even more power to supposedly fix the system. That’s why, in essence, redistribution is not the answer – given the issue of who is making the distributional decision. The answer is to change incentives (or “ideally” to elect someone like Stiglitz to lead us, right?). 

And then we come to the policy proposals; his grand economic reform agenda followed by a rather disappointing political reform agenda (disappointing given his excellent recognition of all the errors of the political system). Both are to be applied simultaneously in order to, well, create a better world, what else?! Reading these agendas (summarized over about 20 pages in the final chapter of the book) felt like reading the political program of a generic social-democratic candidate for president or prime minister. It was just a bunch of bullet points whose only purpose is to convey a positive message to voters, rather than a set of policies that could actually fix the system. Particularly when comparing them to Atkinson’s proposals; I wasn’t really satisfied with their details either, but at least Atkinson made some effort to discuss the viability and applicability of each proposal. With Stiglitz, it all sounds just too cheap. For most of the proposals there isn’t a single sentence on how they are to be implemented, not to mention what the potential effects could be. Apart from the usual: if we implement all this, we’re going to have a better society! It really is a mediocre political program. 

Now don’t get me wrong, some of the things he proposes in the book are quite good. I particularly agree with his criticism of “GDP fetishism”, given that GDP does not accurately reflect living standards, nor the sustainability of the economic growth model. Or the ideas to end corporate welfare (like hidden subsidies to big corporations and government giveaways in the form of procurement), improve access to education, help Americans save (there is, by the way, no indication of how this is supposed to be done, but OK), expand health insurance, improve the legal system, curb discrimination, etc. However, others are just too much, or completely incoherent (like all of his proposals to curb the power of the financial sector – again, no indication of how this is supposed to be done). Some are just breathtakingly shocking. For example, I was bemused that a self-respecting economist could utter these words: “If exports create jobs, then imports destroy jobs; and we’ve been destroying more jobs than we’ve been creating” (pg. 279). Wow! A Nobel-prize winning economist succumbing to the most basic mercantilist fallacy by saying that imports destroy jobs is just unbelievable. You’re not running for President, Prof. Stiglitz, please do away with the conspiracies.  

However, I would still recommend the book even to those who usually disagree with Stiglitz, because if you can look past some of his standard ramblings, he really does deliver a decent analysis of some of the things that went wrong with the US political system.

Monday, 10 October 2016

2016 Nobel prize awarded to Hart and Holmstrom for contract theory

It's that time of the year again - Nobel prize awards! After being awarded to a single recipient two years in a row (Deaton in 2015, Tirole in 2014), this year the Nobel in economics is shared by two worthy winners, both relatively unknown outside the economic arena. The reason is that both Oliver Hart from Harvard and Bengt Holmstrom from MIT are theorists. What is their area of expertise? Contract theory, arguably the most complex field in modern economics. Which is why this year's prize is a laudable effort to recognize this very important branch of economic theory for the very first time. 

So what's contract theory all about? Or to be more precise, what was the significance in their contribution? The official statement says the following: "Modern economies are held together by innumerable contracts. The new theoretical tools created by Hart and Holmström are valuable to the understanding of real-life contracts and institutions, as well as potential pitfalls in contract design ... This year’s laureates have developed contract theory, a comprehensive framework for analysing many diverse issues in contractual design, like performance-based pay for top executives, deductibles and co-pays in insurance, and the privatisation of public-sector activities."

Read a more detailed (layman) explanation here, and a more complex, theoretical one here. Also, here is a good text from Tyler Cowen. 

In a nutshell, contract theory studies how economic agents enter into contractual obligations in the presence of asymmetric information (adverse selection, moral hazard, signaling, etc.). How do they create a mutually beneficial deal that will incentivize both parties to keep their part of the bargain? Theorists in the field try to find the optimal arrangement that will motivate agents to keep their commitments, even when they enter the obligation under a veil of uncertainty. In other words, it's a utility maximization exercise, with time discounting and asymmetric information acting as the constraints. Examples of contractual relationships range from managers and shareholders, to insurance companies and their clients, to a firm and its suppliers, to lenders and borrowers. 

The crucial part of designing an optimal contract structure is the satisfaction of divergent interests. In other words, the contractual relationship must be mutually beneficial, otherwise no one would enter into one. It's a principal-agent game, to be more precise. The issue is that the principal does not have perfect information on whether the agent is keeping to his side of the deal. A multitude of principal-agent examples have been researched in microeconomics, including labour economics and political economy, and have mostly been modeled using game theory. One of the most famous is the labour market: how does the principal (the employer) make sure that the agent (the worker) is not shirking? In a world of limited information the principal can never be sure whether or not the agent is performing his job as contractually obliged. In politics, the principals are the voters (the public), whereas the agents are the politicians. The voters "hire" politicians to do the job of running the country, but have very limited oversight over what the politicians are actually doing (notice that in dictatorships it is the dictator who is the principal, since he is accountable to no one). The example that Holmstrom used is the relationship between a company's shareholders (principal) and its CEO (agent): how do the shareholders make the CEO maximize shareholder value instead of just his own compensation? 
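To make the moral hazard problem concrete, here is a minimal numerical sketch (the numbers are invented for illustration, not taken from the laureates' work): under a flat wage, effort is never worth its cost to the agent, while pay contingent on output can make effort incentive-compatible.

```python
# Hypothetical principal-agent numbers: effort raises the probability of
# high output but is costly to the agent.
p_effort, p_shirk = 0.9, 0.5   # probability of high output with/without effort
effort_cost = 2.0              # the agent's private cost of exerting effort

def agent_payoff(wage_high, wage_low, effort):
    """Expected wage minus effort cost, given output-contingent pay."""
    p = p_effort if effort else p_shirk
    return p * wage_high + (1 - p) * wage_low - (effort_cost if effort else 0.0)

# Flat wage of 4: the agent gets 4 either way, so effort only subtracts its
# cost -> the agent shirks.
print(agent_payoff(4, 4, effort=True))   # 2.0
print(agent_payoff(4, 4, effort=False))  # 4.0

# Performance pay (6 on high output, 0 otherwise): effort now pays off.
print(agent_payoff(6, 0, effort=True))   # 3.4
print(agent_payoff(6, 0, effort=False))  # 3.0
```

The principal cannot observe effort directly, but by conditioning the wage on observable output the contract aligns the agent's incentives with the principal's interest.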

Contract theory goes one step beyond modelling the pure principal-agent game. It is concerned with designing the optimal contractual structure to make sure that both parties benefit from the relationship and that neither has an incentive to shirk. It's all a question of risks vs incentives: which incentives will you offer the agent, and at what cost? Examples include everything from performance-based pay structures to the extent of bonuses and stock options given to managers. The other important part, due to Hart, was to distinguish between complete and incomplete contracts, where the former are a theoretical construct, while the latter apply to realistic settings in which the parties are unable to precisely articulate all the details ex ante. In particular, the theory of incomplete contracts acknowledges the informational asymmetry and is concerned with the optimal allocation of control rights: which party should be given the decision rights (and how should the other party be compensated), or should the decision rights be given to a third party? 
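The risk-vs-incentives trade-off has a well-known closed-form illustration in the linear contract model associated with Holmstrom (and Milgrom): if output is x = e + noise and the wage is w = s + b·x, with CARA risk aversion r, quadratic effort cost c·e²/2, and noise variance σ², the surplus-maximizing incentive weight is b* = 1/(1 + r·c·σ²), so noisier performance measures call for weaker incentives. A quick sketch (the parameter values below are arbitrary):

```python
def optimal_piece_rate(r, c, sigma2):
    """Holmstrom-Milgrom linear contract: wage w = s + b*x, output x = e + noise.

    The agent has CARA risk aversion r, effort cost c*e**2/2, and the noise
    has variance sigma2. The surplus-maximizing incentive weight is
        b* = 1 / (1 + r*c*sigma2).
    """
    return 1.0 / (1.0 + r * c * sigma2)

print(optimal_piece_rate(2.0, 1.0, 0.0))  # 1.0 -- no noise: full incentives
print(optimal_piece_rate(2.0, 1.0, 0.5))  # 0.5 -- noisier output...
print(optimal_piece_rate(2.0, 1.0, 2.0))  # 0.2 -- ...means weaker incentives
```

With no noise the agent can safely be made the full residual claimant (b* = 1); as output becomes a noisier signal of effort, loading pay onto it imposes risk on the agent, so the optimal contract dials the incentive weight down.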

Needless to say, the real-life applications of these theoretical findings are primarily embedded in law (corporate law to be exact), particularly in cases of property ownership, control rights, mergers and acquisitions, bankruptcy legislation, financial contracts, etc. This also means that their application is important for designing policy: optimal bankruptcy laws, property rights, and even privatization (how to strike a balance between cost reduction and quality). 

All in all, a welcome award for a fascinating field of research. All the more so given that it was the recipients themselves who designed this theoretical framework for others to build on. Congratulations to Hart and Holmstrom!