Friday, 10 November 2017

How Fast will the Platform Revolution Proceed? Why Digital Platforms Don’t Work in Education and Healthcare (Yet)


The past two decades have seen the dramatic rise of digital platforms as a revolutionary business model for creating value in a connected world. However, platforms have so far failed to make inroads into the education and healthcare industries. In this post I explore why.

In economics, a “platform” is any kind of intermediary that brings a large number of others together for a mutually beneficial interaction. Often, but not always, this involves a monetary transaction. For example, an auction house is a platform bringing together buyers and sellers; a taxi company brings together drivers and riders; a newspaper brings together readers and advertisers. Platforms have been around since the birth of trade in ancient village marketplaces. The advent of digitization, the internet, and, more recently, smartphones has allowed a new breed of digital platforms to upend traditional industries. They do this by dramatically lowering the cost of intermediation, and by aggregating large volumes of information to improve the quality of the matches made on the platform. For example, eBay can offer a vastly bigger variety of goods than a physical auction house, along with tools to search through them. Uber can match a rider with a driver dropping off a passenger nearby, and then provide precise directions to the rider’s location based on GPS. Google and Facebook can provide targeted advertisements that increase the odds you will discover goods and services you want. Digital platforms provide such big advantages over traditional business models that they underlie the success of all of the “internet giants,” the top five of which are now collectively valued at over $3 trillion.1

In their 2016 book “Platform Revolution,” Geoffrey Parker, Marshall Van Alstyne, and Sangeet Paul Choudary provide a detailed and accessible overview of the strategic decisions firms face when trying to enact a digital platform business model. As a primer in understanding how digitization affects business strategy, it is hard to beat. Over twelve chapters the authors step through the theory of platforms—including the importance of “network effects,” the economic term for the increasing value that platform users gain the more other users there are on the platform—and the practical challenges of building a successful platform, such as the “chicken and egg” problem of getting an initial critical mass of users on board.

The book’s final chapter discusses “the future of the platform revolution.” Why have some industries adopted digital platforms faster than others? Which industries will be revolutionized next? The authors identify four factors that stimulate platform adoption in an industry: information intensity, non-scalable gatekeepers, fragmentation, and extreme information asymmetries. Three factors slowing “platformization” are strong regulatory control, high failure cost, and physical resource intensity. In light of these factors, the authors analyze education, healthcare, energy, finance, and other industries. Why isn’t there (yet) an “Uber for Doctors” with the same level of success as Uber for drivers? The authors argue that the positive drivers are very strong in education and healthcare, and that it is mainly the power of regulators and incumbent suppliers that holds back the transformation of these industries to platforms.

I suggest a further reason why education and healthcare are fundamentally problematic to move onto platforms. It is grounded in a subtle but important distinction made in the economic subfield of industrial organization (IO) between different types of goods. IO economists distinguish between ‘experience goods’ and ‘credence goods.’ Both types of goods are subject to asymmetric information, in the sense that the buyer cannot observe a good’s quality before buying it. The distinction is that with experience goods, the quality is revealed to the buyer after use—examples include eating at a restaurant or staying at a hotel in a new city. We are uncertain how good the service will be when we make the reservation, but after having experienced the goods we know exactly what the service was like. In contrast, with credence goods we cannot tell the quality even after we’ve experienced the goods. We can only take it on faith that the service was good or the advice was correct. This is a deep form of informational uncertainty. An example would be strategy consulting advice that McKinsey provides to a Fortune 500 CEO. The CEO might follow McKinsey’s recommendation to slim the firm’s product line; maybe sales decline slightly but costs decline a lot and profits improve. This could be interpreted as meaning the advice was good. But numerous other factors affect sales and costs. It’s impossible to precisely attribute the outcome to the consultants’ advice.

Platforms work well for experience goods. Very well, in fact. Nowadays I rarely reserve a restaurant or hotel without checking its aggregate reviews on a platform such as OpenTable or TripAdvisor. Since other customers have experienced these goods and reported on their experiences, I have a wealth of information to help me make my choices. Importantly, because these are experience goods (and not credence goods), the information in those reports is meaningful. Ratings that platform users make about their past transaction partners are an essential input to platforms such as eBay, Uber, and Airbnb. The ratings create an intermediated system of trust. They make possible interactions with complete strangers, without any other form of accreditation—getting in their cars, staying in their homes, sending them cash—all because participants can rate their past interactions, and thus weed out incompetent and ill-intentioned platform participants.

Critically, platforms don’t work well for credence goods. Users cannot leave informed ratings of the service they received, because they do not know how good it was, even ex post.

The trouble with healthcare and education, then, is that these are credence goods. They rely on such deep tacit knowledge and are surrounded by such uncertainty that even after interacting with healthcare professionals or teachers, we cannot say with any certainty whether they did their job correctly. Of course, we can rate how much we enjoyed the interaction. We can rate how friendly they were. But these things bear little relation to whether they cured our illness or taught us valuable knowledge. Grumpy doctors and stern teachers may nevertheless be effective! A sick person who visits a doctor and gets prescribed medicine may get better or may get worse; in either case it’s unlikely to be possible for the sick person to know whether they would have been better or worse without the medicine, or with a different medicine. A child in school has little sense at that precise moment of whether what they’re learning will be useful to them in later life. Using online platforms to collect ratings in these settings could be worse than useless2—it could lead to service providers prioritizing customers’ subjective sense of customer service over their actual well-being.

This sets up a major limitation to the use of digital platforms in the contexts of education and healthcare. Where Parker, Van Alstyne and Choudary note that platforms can help industries overcome information asymmetries, this should come with a caveat that aggregated consumer ratings work well for experience goods, but not for credence goods. Platforms in healthcare and education may yet be able to overcome information asymmetries in other ways, such as building on the existing systems of certification and legitimacy that these industries are built on. In the meantime we should be wary of prescribing the platform pill in cases where it might have harmful side-effects.

_________________
Further reading: I have previously written some reflections on platforms, from a technological angle, which can be found here. The definition of platform used in that essay was different, but the concepts of a technological platform and an economic platform are highly related (e.g. see here for a review).
1 This assertion is based on the following market caps as of 11th November 2017: Apple 897b, Alphabet 720b, Microsoft 647b, Amazon 542b, Facebook 520b. I haven’t exhaustively checked all firms, and note that Alibaba and Tencent could be alternate members of the top 5 (and probably feature in the top seven).
2 Further to the deep uncertainty around quality, there is also a potential selection bias in what ratings are observable. Dead patients don’t leave negative reviews.

Monday, 17 July 2017

On Schell, Schelling, and Nuclear War

As a mathematical tool, game theory is useful for formalizing our intuitions so we can analyze them systematically. Game theory is most powerful, however, when it shows us that rigorous thinking can lead to counter-intuitive results. In this post I juxtapose two writers—Jonathan Schell, a journalist, and Thomas Schelling, a game theorist—who have thought in incredible depth about one of the gravest threats to mankind’s existence: the possibility of nuclear war.

I first learned about Jonathan Schell by reading his obituary in March 2014.1 Schell authored ‘The Fate of the Earth’ which is, at once, a visceral, historical account of the atomic bombing of Hiroshima and a scientific and philosophical meditation on the possibility of human extinction by nuclear war. The book draws its power by opening with actual accounts of the horrifying effects of an atomic weapon—the fire spread through the city, outpacing its fleeing populace, masses more dying of radiation sickness—then shifting fluidly to hypotheticals in which New York City is attacked with a nuclear weapon. It discusses the predicted sky-scorching effects of all-out nuclear war and dwells on the bleak prospect of extinction, of an infinite future in which humans are absent from the Universe. Schell’s position—his conclusion—was that the only way to prevent nuclear holocaust was a worldwide movement of nuclear disarmament. As long as nuclear weapons are in existence, the risk of them being used, however infinitesimal, is too high.

If we agree that complete disarmament is a desirable end point (a hotly debated topic), can we actually get there in practice? This is where Schelling comes in. Schelling is known within social science for breakthrough contributions to the analysis of coordination, a thorny corner of game theory where the standard Nash Equilibrium solution concept gives rise to a proliferation of equilibria, and for pioneering the use of computational models to show that small shifts in individual-level preferences can cause large changes in society-scale outcomes.2

The interplay of game theory as a scholarly field and nuclear strategy as a matter of applied international relations goes back a long way. The concept of ‘mutually assured destruction,’ often known by the acronym MAD, is a game-theoretic one. It basically says neither adversary in a nuclear conflict will employ a first-strike strategy if it knows that the other side will retain the capability to wipe it out through retaliation. The doctrine of MAD has entered the popular discourse, and was parodied perfectly by Kubrick’s Dr. Strangelove.

An interesting—and very practical—corollary of MAD reasoning is explored by Schelling in the Appendix to his 1960 classic ‘The Strategy of Conflict.’ He argues, and shows mathematically, that partial nuclear disarmament is extremely risky. The capability to wipe out an opponent even after one has suffered a pre-emptive strike is what lends the mutually assured destruction set-up its stability. An opponent who fears they will have no capability left with which to retaliate if they are attacked has greater reason to take the risk of initiating the first strike. The upshot of the game-theoretic analysis is the rather counter-intuitive result that partial disarmament is worse than no disarmament at all.

The von Neumann / Schelling / MAD reasoning was based on the Cold War context, which basically entailed two largely-symmetric, competing nuclear powers. Game theory also assumes actors behave ‘rationally,’ i.e. each actor is self-interested and forward-looking and assumes that other actors are too. This seems to have been a reasonable assumption for that era.3 As of 2017 it is not clear these same assumptions apply, which is a cause for concern. The ‘players’ in today’s nuclear ‘game’ are not so symmetric, nor is it clear that they will behave as predictably as economists’ rational actors do. It is presumably for this reason that the Bulletin of the Atomic Scientists has moved its Doomsday Clock to ‘two and a half minutes to midnight,’ its riskiest point since 1953. It is a wise time to revisit the writings of both Schell and Schelling, take seriously this existential threat, and hope that cool heads will prevail.

____________________________
1 Another post on this blog that was first inspired by an obituary is the one discussing the work of James Martin, who passed away in 2013. The following summer I read both Schell’s and Martin’s landmark books. Some of my thoughts on Martin’s ‘The Meaning of the 21st Century’ are recorded here.
2 An excellent analysis of the organizational apparatus underlying the military strategy during the Cuban missile crisis is provided by Graham Allison in his classic, ‘Essence of Decision.’
3 Other posts I've written drawing on Schelling's ideas can be found here and here.


Saturday, 14 May 2016

Alternatives to Growth? Platforms, Modularity and the Circular Economy


The following is an essay I submitted to the St. Gallen Symposium's 'Wings of Excellence' Award; it was selected as a finalist for the award:
The St. Gallen Symposium Leaders of Tomorrow have posed the question, What are alternatives to economic growth? In this essay I draw on ideas from technology strategy and systems theory to put forward a vision for sustainable improvement in human well-being which does not depend on economic growth, as it is currently measured. First, I discuss just why we need a new approach to progress. Then I will describe a new way of thinking about ‘progress’ which transcends the traditional growth-orientation. Three key concepts—platforms, modularity, and the circular economy—suggest ways to create value without transactions, to stimulate innovation at low cost, and to inject sustainability as a design feature of the economy, not an afterthought. After introducing each concept in turn, I discuss the synergies between all three which mean that together they offer a compelling alternative to the present narrow focus on economic growth.
The Challenge
The prevailing paradigm of growth-oriented capitalism has several intrinsic flaws. Here I highlight two.
First, there is the issue of resource sustainability. Much of today’s economic activity is generated roughly as follows: we unearth some raw material from the ground, process it through a multitude of steps, use the finished product, and then throw it away, at which point it goes to landfill. Before the industrial revolution, this system worked because the quantities of materials and waste were minuscule compared to the overall system. Nowadays, due to population growth and rising living standards, we face the very real possibility of finding key resources in short supply.[1] Our waste outputs—in the form of greenhouse gases—are now having geologically significant effects on the planet.[2] As many have observed, perpetual growth is a physical impossibility because of the limitations of the planetary system.[3] Hence, we require an alternative.
Second, there is the issue of poverty. Growth-oriented capitalism has failed to solve the problem that hundreds of millions of people cannot afford many things which those of us in developed countries take for granted—such as food, clean water, housing and household comforts, access to education. ‘Trickle-down economics’ has failed; growth has increasingly benefited those who are already wealthy.[4] Moreover, innovation is directed towards things people or governments in the rich world will pay for, such as smartphones, medical devices, and military hardware. The spending on so-called frugal innovation, to create novel products for the world’s poor, is a fraction of what is spent on high-end innovation. To benefit the majority of mankind, innovations in the future will need to be dramatically lower cost than those of today.
Platforms
The concept of a ‘platform’ has emerged in the last two decades from studies on the economics of technology. In a technological system, a platform is a central component which other complementary components can attach to. For example, in the software world, an operating system (OS) is a platform on which individual pieces of software can be installed; it is the joint package of OS plus software that creates value for users. More abstractly, in market systems a platform may be a central organization with which other individuals and/or organizations interact. For example, eBay is a ‘two-sided’ platform which brings together sellers and buyers of physical goods. In the words of management professor Annabelle Gawer, a platform ‘acts as a foundation upon which other firms can develop complementary products, technologies or services.’[5]
The power of platforms is that they bring together people to allow mutually valued interactions. Some of these may entail transactions—such as a good being sold on eBay—in which case they show up as contributing to economic growth. But much of the time the interactions that platforms facilitate involve no money changing hands. For example, the website Quora is a platform on which people can post questions or answers, exchanging valuable knowledge, without any price attached. This can create tremendous value, but does not generate economic growth as measured by GDP.
Platforms benefit from a phenomenon that economists call ‘network externalities:’ the value of joining a platform rises the more other people there are already using it. For example, social media platforms are more attractive to use if they have an active community of users to interact with. This results in dramatically increasing returns to scale, captured by ‘Metcalfe’s law,’ which states that ‘the value of a network goes up as the square of the number of users.’[6] In many cases only a small fraction of this value is accounted for as ‘economic growth’ in national statistics.
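The arithmetic behind Metcalfe’s law is simply the count of possible pairwise connections among n users, which grows roughly as n². A quick sketch:

```python
# Metcalfe's law in miniature: an n-user network supports n*(n-1)/2
# possible pairwise connections, which grows roughly as n^2.

def possible_connections(n):
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, possible_connections(n))
# 10 45
# 100 4950
# 1000 499500

# Doubling the user base roughly quadruples the possible interactions:
print(possible_connections(2000) / possible_connections(1000))  # ~4.0
```

This quadrupling with each doubling is the source of the dramatically increasing returns to scale that platform businesses enjoy.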
Platforms are especially well suited to digital technology, which enables fast, cheap information flows, and makes a platform easy to scale up. Digital platforms make efficient use of raw materials: once a fixed investment is made in hardware, the only ongoing resource a digital platform uses is the electricity to run its servers. Digital platforms therefore create tremendous value with very few natural resources. This makes them an essential pillar in a future that transcends growth-oriented capitalism.
Modularity
The concept of modularity is closely related to the idea of a platform. Modularity is a property of a system that means it is partitioned into constituent parts that have clearly defined interfaces. A product system is modular if its components can be easily swapped out and interchanged with others. For example the traditional PC has a modular architecture: its internal components (e.g. graphics card, sound card) and peripheral components (e.g. keyboard, monitor, mouse) all plug in through standard interfaces and can be individually upgraded.[7] An organization can be said to be modular if it is made up of subdivisions that operate in a relatively self-contained manner, such as the academic departments of a university.
The essence and importance of modularity was first articulated by Herbert Simon in his seminal essay, ‘The Architecture of Complexity.’[8] His observation: a modular architecture allows a system to evolve, through trial-and-error experimentation with alternate components. When a new component enhances the value of the system, it can be retained, and if it detracts from the system it gets discarded. This general observation reads across directly to modularity and evolution of technological products; the modular architecture of the PC is credited with catalyzing innovation in the computer industry.
In a recent essay, Carliss Baldwin and Jason Woodard observe that by their nature platform-based industries exhibit a modular architecture: ‘In essence, a “platform architecture” is a modularization that partitions the system into (1) a set of components whose design is stable and (2) a complementary set of components that are allowed – indeed encouraged – to vary.’[9] Platforms therefore have the potential to be highly ‘evolvable’ systems. They allow new designs and product permutations to be tried out at low cost, with little waste. In other words, platforms can facilitate efficient innovation, enhancing value creation without entailing massive resource expenditures.
The Circular Economy
A third key concept I wish to highlight is the notion of the circular economy. As noted above, our present economic paradigm entails extracting natural resources from the ground, and burying our waste products, which in systems dynamics terms creates an ‘open loop.’ Proponents of a circular economy, such as the Ellen MacArthur Foundation, argue we need to close this loop. In the first instance, we should recycle waste as a source of raw materials. More deeply, we need to redesign our products and our industries to close the resource loop. When a product is decommissioned at the end of its lifespan, not all its components are useless. Many, in fact, may be in a good enough condition to use in a new product, but under the present system they can end up in landfill or in an incinerator. If the original product were designed with disassembly in mind, then retrieving reusable components becomes a real possibility.
The building industry provides an exemplary case study. Construction accounts for around 15% of global greenhouse gas emissions.[10] Construction is carbon intensive because the chemical process for manufacturing cement, an ingredient of concrete, releases large quantities of carbon dioxide. When a concrete structure is demolished—either at the end of its lifespan, or (more commonly) to make space for a newer building—the rubble is typically shipped to landfill. New concrete is then poured, meaning new cement is used and new emissions are generated.[11] Efforts to close this wasteful loop are vitally important, given the need to build quality housing in the rapidly growing urban centers of the world’s emerging economies. One step will be increasing the degree of recycling of old concrete rubble, which can be used as an input to building processes, thereby diverting it from landfill. But the truly ‘circular economy’ approach will entail designing building materials with re-use in mind. Reinforced concrete slabs will be treated as components that can be recovered and reconfigured, instead of scrapped, when a building needs to be replaced. This has been an architectural dream at least since the ‘Metabolist’ movement in post-war Japan, and modern researchers are getting nearer to creating it as a reality.[12]
Synthesis
Individually, these three concepts are each powerful levers to improve quality of life. Together, the complementarity between them makes for an even more potent recipe.
The aim of this essay is to advocate that we move towards a model of capitalism based on circular resource flows and rising quality of life driven by modular innovation. By itself, a circular economy may imply stagnation in living standards. It has echoes of Schumpeter’s ‘circular flow’ in which every year industrial activity looks much like the last.[13] And by itself, evolutionary innovation based on experimentation with modules can be highly resource intensive; we can waste a lot of resources to produce modules we don’t use, and there is a strong temptation to throw out a module once we find a better one. This is clearly visible in the huge amount of electronic waste that developed countries pump out every year.
We need to move towards an industrial infrastructure based on stable long-lasting platforms and interchangeable modular components that can attach to the platform but which themselves conform to a closed-loop production process. This abstract idea can apply in numerous realms, from the now-familiar electronics and software platforms, through manufacturing—using technologies such as 3D printing as the base platform—and built-environment, in which modular skyscrapers could provide a housing solution to the world’s growing urban population. The synergies between platforms, modularity, and a circular economy are several; I enumerate four here:
1.    Economies in design. By letting a common platform underlie a variety of modules, we can avoid wasting the effort of replicating something that has been designed elsewhere. In other words, platforms allow us to converge on a set of common standards, which makes design much more efficient.
2.   Economies in production. With a common underlying platform we obtain economies of scale in the production process for both the platform and the modules. This will play a big role in making innovations accessible to the world’s poor.
3.   Rapid scalability of improvements. When a better design for a module is invented, the use of a common underlying platform will allow the new design to be diffused and adopted widely with great ease. Many new designs will be distributed royalty-free under an ‘open source’ license.
4.   Re-use of modules. Modules can be designed such that they can be disassembled and altered, rather than disposed of, if a better design for that module is developed. This is also a process that benefits from economies of scale in the infrastructure for module renewal.
Consider, by way of illustration, a world with a commonly agreed upon standard for 3D printing, with widely available devices that can print with a small number of specified materials. The material feedstock for the printer would be derived by disassembling used products. The printer is the platform, and the products it makes are the modules. Creative designers anywhere in the world would post designs online that others could download and use: there would be rapid, evolutionary innovation in the modules. Replacing a physical good with the latest, updated model would become much like updating a piece of software today.
Together, platforms, modularity, and the circular economy work in synthesis to make economic activity more environmentally sustainable, and make innovations accessible to the lowest income people on the planet. They offer a compelling alternative to the narrow focus on economic growth that prevails today.
  
References
Bajželj, B., Allwood, J. M., & Cullen, J. M. 2013. Designing climate change mitigation plans that add up. Environmental science & technology, 47(14): 8062-8069.
Baldwin, C. Y., & Woodard, C. J. 2009. The architecture of platforms: A unified view. In A. Gawer (Ed.), Platforms, markets and innovation. Cheltenham, UK: Edward Elgar Publishing.
Bresnahan, T. F., & Greenstein, S. 1999. Technological competition and the structure of the computer industry. The Journal of Industrial Economics, 47(1): 1-40.
Gawer, A. 2009. Platforms, markets and innovation: An introduction. In A. Gawer (Ed.), Platforms, markets and innovation. Cheltenham, UK: Edward Elgar Publishing.
Graedel, T. E., Harper, E. M., Nassar, N. T., Nuss, P., & Reck, B. K. 2015. Criticality of metals and metalloids. Proceedings of the National Academy of Sciences of the United States of America, 112(14): 4257-4262.
Meadows, D., Randers, J., & Meadows, D. 2004. Limits to growth: The 30-year update. Chelsea Green Publishing.
Rios, F. C., Chong, W. K., & Grau, D. 2015. Design for disassembly and deconstruction-challenges and opportunities. Procedia Engineering, 118: 1296-1304.
Saez, E., & Zucman, G. 2016. Wealth inequality in the united states since 1913: Evidence from capitalized income tax data. Quarterly Journal of Economics, (forthcoming).
Schumpeter, J. A. 1934. The theory of economic development: An inquiry into profits, capital, credit, interest, and the business cycle. Cambridge, MA: Harvard University Press.
Shapiro, C., & Varian, H. 1999. Information rules. Cambridge, MA: Harvard Business School Press.
Simon, H. A. 1962. The architecture of complexity. Proceedings of the American Philosophical Society, 106(6): 467-482.
Waters, C. N., Zalasiewicz, J., Summerhayes, C., Barnosky, A. D., Poirier, C., Gałuszka, A., Cearreta, A., Edgeworth, M., Ellis, E. C., & Ellis, M. 2016. The anthropocene is functionally and stratigraphically distinct from the holocene. Science, 351(6269).



[1] See, for example, Graedel et al. (2015) on metals criticality.
[2] See Waters et al. (2016)
[3] See Meadows, Randers, and Meadows (2004)
[4] For example, since the financial crisis wealth gains in the United States have predominantly gone to the top 0.1% of households in the wealth distribution; average wealth of the bottom 90% of households has fallen (Saez & Zucman, 2016).
[5] Gawer (2009: 2)
[6] Shapiro and Varian (1999: 184)
[7] See Bresnahan and Greenstein (1999)
[8] Simon (1962)
[9] Baldwin and Woodard (2009)
[10] 7.7 Gt of a total 50.6 Gt CO2 equivalent in 2010, see Bajželj, Allwood, and Cullen (2013)
[11] Concrete production has been accelerating, and the scale of production is immense. Geologist Colin Waters and colleagues point out that concrete is now a geologically significant material in the stratigraphy of the planet: ‘The past 20 years (1995–2015) account for more than half of the 50,000 Tg of concrete ever produced, equivalent to ~1 kg m−2 of the planet surface.’  (Waters et al., 2016)
[12] See, for example, Rios, Chong, and Grau (2015)
[13] See chapter 1 of Schumpeter (1934)

Saturday, 26 March 2016

“X-Contingent Loans:” Valuable Innovation or Inevitable Temptation?

The path to hell is paved with good intentions. The best of intentions seem to lie behind a proposal by Montazerhodjat, Weinstock, and Lo that chronically sick people should be able to take out health-contingent loans to finance curative treatments instead of paying repeatedly for short-term medicine. The logic behind the suggestion is undeniable: the patient’s financial and health outcomes both look better under the hypothetical loan-based model. Just like a renter who prefers to take out a mortgage to buy a home rather than rent indefinitely, the patient exchanges a perpetual stream of payments for a lump sum of debt that they can eventually pay off. In addition, the loan they propose is contingent on the person’s continued health—if the cure doesn’t work the loan is written off. This aligns the incentives of the pharmaceutical vendor with those of the patient: both want to see the patient permanently cured. In the long run, if these loans were widely available, more R&D would be directed towards curative medicine rather than temporary or partial treatments.
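The rent-versus-buy logic can be made concrete with a back-of-the-envelope comparison. All figures below are hypothetical, chosen only to show the shape of the argument, not drawn from the Montazerhodjat, Weinstock, and Lo proposal:

```python
# Toy rent-vs-cure comparison (all numbers hypothetical).
# "Renting health": pay for symptom management every year.
# Health-contingent loan: finance a one-off cure; the debt is
# written off if the cure fails, so the patient pays only on success.

annual_treatment = 20_000     # recurring cost of managing the condition
horizon_years = 30
cure_price = 300_000          # up-front price of the curative therapy
cure_success_prob = 0.8       # probability the cure works

lifetime_rent = annual_treatment * horizon_years      # total paid "renting"
expected_loan_cost = cure_price * cure_success_prob   # expected patient outlay

print(lifetime_rent)        # 600000
print(expected_loan_cost)   # 240000.0
```

Under these made-up numbers the contingent loan is cheaper in expectation and also delivers the better health outcome, which is exactly the intuition driving the proposal (discounting and the lender’s cost of capital are ignored here for simplicity).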

Similar rumblings can be heard in the discussion around financing higher education. The idea of a “human capital contract” is that a prospective student might agree to pay a percentage of their future income in exchange for having their tuition fees paid up front. It is a financial model that replaces the concept of student debt with one of student equity: by sponsoring my university studies you are “buying shares” in my future prospects. The idea is analysed in depth by Miguel Palacios in his 2004 book “Investing in Human Capital.” Palacios rightly points out that there are ethical and practical dilemmas involved. On the ethical side, the concept of equity in a human being has echoes of slavery or indentured servitude, and no matter what guarantees the sponsor makes to preserve the recipient’s freedom, there will be a risk of exploitative or coercive behavior by sponsors to ensure the recipient takes a higher paying job. Also, the idea of indefinite contracts is rather unpalatable. On the practical side, the proposed contracts suffer from severe problems of adverse selection and moral hazard.1 A more palatable half-way house between plain vanilla loans and human capital contracts is the “income-contingent loan,” which you pay back with some percentage of salary over a given threshold. Income-contingent loans typically include a capped repayment period, after which the remaining balance is written off.
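The income-contingent repayment rule is simple enough to sketch. The threshold and rate below are illustrative placeholders (loosely in the spirit of UK-style student loans, not any specific scheme):

```python
# Sketch of an income-contingent repayment rule (parameters illustrative):
# pay a fixed percentage of income above a threshold, nothing below it.

def annual_repayment(income, threshold=25_000, rate=0.09):
    """9% of income above 25,000; zero repayment below the threshold."""
    return rate * max(0, income - threshold)

for income in [20_000, 30_000, 60_000]:
    print(income, annual_repayment(income))
# Below the threshold the graduate pays nothing; above it, payments
# scale with earnings (~450/yr at 30k, ~3,150/yr at 60k), so the
# repayment burden automatically tracks ability to pay.
```

This built-in insurance against low earnings, plus the capped repayment period, is what makes the income-contingent loan more palatable than either a plain loan or a full human capital contract.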

The health-contingent loans to pay for medical treatment and the income-contingent loans to pay for education are part of a broader class of “human capital loans,” which are used to build up the intangible value embodied in the person themselves rather than in their tangible assets. They also belong to a category I am calling “X-contingent loans,” since the amount that gets repaid depends on how future events unfold. As the two examples suggest, X-contingent loans might provide a route to finance things that increase quality of life but for which traditional financial products don’t work. They may be one of the defining financial innovations of the 21st century, unlocking tremendous value for poor people whose “liquidity constraint” prevents them from accessing healthcare or higher education. But they also pose substantial risks, which are not always acknowledged by their proponents. I will highlight two.

First, we consumers are not good at judging the costs and benefits of complicated financial products. The concept of compound interest is not intuitive, meaning we struggle to deal with our long-run finances, even when consumer financial products have deterministic payment profiles. Once you add “contingencies” into the mix, we become pretty hopeless at judging what is a good deal. Corporations have statisticians on staff to compute probabilities of myriad events occurring, and even they were caught out by the correlated decline in house prices that precipitated the financial crisis of 2008. As everyday consumers we have only the coarsest sense of what the future holds; we make decisions based on gut feel, not on probabilities. When it comes to complicated financial products we are easily misled, and even when a vendor is being transparent (the exception, not the rule these days), we tend to make poor choices. When we make poor choices, we lose out, and ultimately markets may fall apart completely.
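
To see how quickly intuition fails even before any contingencies are added, here is a minimal sketch (with hypothetical figures) comparing a linear guess about interest with the true compounded amount:

```python
# Toy illustration: compound interest defies linear intuition.
# A borrower who reasons "5% a year for 30 years is roughly 150% extra"
# badly underestimates the true cost of compounding.

principal = 100_000  # hypothetical loan amount
rate = 0.05          # 5% annual interest
years = 30

linear_guess = principal * (1 + rate * years)  # simple-interest intuition
compounded = principal * (1 + rate) ** years   # actual compound growth

print(f"Linear guess: {linear_guess:,.0f}")  # 250,000
print(f"Compounded:   {compounded:,.0f}")    # ~432,194
```

The compounded total is over 70% larger than the linear estimate; layering contingent payoffs on top of this only widens the gap between the true deal and our gut feel.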

To emphasize this point, consider two different consumer financial products with “embedded options.” In the US, in contrast to many other developed nations, most mortgages are fixed-rate. However, mortgage borrowers are typically allowed to prepay their loan. If prevailing interest rates go down, it becomes attractive to refinance a mortgage at a lower rate. Mortgages therefore carry an implicit call option on the part of the borrower, which creates “prepayment risk” for the lender, vastly complicating the valuation of the pools of mortgages that make up mortgage-backed securities. Mortgage providers compounded this complexity by issuing mortgages with attractive starting rates but complicated tiers of future payments (and some mortgage issuers simply engaged in outright fraud). The widespread, correlated defaults on mortgage payments from 2007 were an unforeseen consequence of the products’ complexity. The US market for automotive leasing also has options embedded by law in the leases: at the end of the lease the lessee can decide either to purchase the car at its pre-specified residual value, or to return it to the lessor.2 Monthly lease payments amortize the difference between the purchase price and the residual value, so lessors can offer attractive deals by using optimistic estimates of residual values. In the early 2000s, this market fell into dysfunction because lessors had systematically overestimated the residual values of vehicles, and far more vehicles were returned to lessors than anticipated. Many banks exited the industry, sustaining hundreds of millions of dollars in losses.
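
The lease-end option can be sketched in a few lines (with hypothetical numbers of my own): the lessee returns the car exactly when its market value has fallen below the contractual residual, so optimistic residuals translate directly into lessor losses.

```python
# A minimal sketch of the embedded option in a US car lease.
# The lessee buys the car only if it is worth at least the contractual
# residual value; otherwise it is returned and the lessor eats the gap.

def lessor_outcome(residual_value, market_value):
    """Lessor's gain or loss at lease end, relative to the residual."""
    if market_value >= residual_value:
        return 0  # lessee purchases at the residual; lessor is made whole
    # Car is returned; lessor sells at market and books the shortfall.
    return market_value - residual_value

# An optimistic $18,000 residual against a realized $14,500 market value:
print(lessor_outcome(18_000, 14_500))  # -3500
```

Because the lessee holds the option, the lessor’s payoff is capped at zero but unbounded below, which is why systematically optimistic residuals produced the industry-wide losses described above.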

The second big problem with X-contingent loans is that prices respond to the quantity of money available to buy something. This is a fairly intuitive result of the forces of demand and supply: when buyers have more money available, the effective level of demand rises. This is why central banks lower interest rates to stimulate aggregate demand in the economy, but raise interest rates if excess demand is causing inflation. When we consider introducing a form of credit that is specific to one particular consumer good, we should expect a high rate of inflation in the prices for that good. The clearest real-life example is (again) the market for property, with the expansion of credit in the form of larger and easier-to-obtain mortgages. The long-run trend of allowing people to borrow ever larger multiples of their income has sent property prices skyrocketing. This effect is already in evidence in the US market for higher education, in which tuition fees have risen at roughly three times the rate of inflation, hand-in-hand with the expansion of student loans to pay these fees.3 We should anticipate that if health-contingent loans expand the credit available to pay for treatments, their prices (if left unregulated) will rise too. This is compounded by the highly inelastic demand for health: people will pay for health at whatever price is asked—it is, after all, a matter of life or death.
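
A toy model of this mechanism (my own stylized assumption, not from the proposals discussed): if buyers of a supply-constrained good bid up to the maximum a lender will advance, the clearing price tracks the lending multiple rather than incomes.

```python
# Toy model: prices for a fixed-supply good track the credit limit.
# If every buyer bids up to the maximum they can borrow, loosening
# lending rules raises prices without any change in incomes.

def max_bid(income, loan_multiple, deposit=0.0):
    """Largest price a buyer can pay under the prevailing lending rules."""
    return income * loan_multiple + deposit

income = 40_000
print(max_bid(income, 3.0))  # lending capped at 3x income -> 120000.0
print(max_bid(income, 5.0))  # loosened to 5x income       -> 200000.0
# Incomes are unchanged, yet the affordable price -- and hence the price
# of a supply-constrained good like housing -- rises by two-thirds.
```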

All this considered, what can we do? My inclination is to treat X-contingent loans very cautiously. We should move forward slowly rather than rush to welcome this tempting-looking new market. And we should consider centralized government control. If we compare the situation with student loans in the US and the UK, neither is perfect, but the presence of private student debt in the US makes for a far more dysfunctional system than in the UK, which already runs on income-contingent loans. Mortgages have made home-ownership a realistic possibility for millions, but have also come close to sinking the global financial system. If we can learn from past mistakes, maybe with X-contingent loans we can expand the benefits of access to healthcare and education, in a humanistic rather than an exploitative way.
_________________________________
 1 In this context, adverse selection occurs because people who expect to earn a lot in future will prefer to take out loans to finance their education, while people who expect to earn less will opt for human capital contracts. Moral hazard occurs because a person has a lower incentive to take a higher-paying job if some percentage of that extra income gets paid to their sponsor.
2 This example is taken from the academic article "Big Losses in Ecosystem Niches: How Core Firm Decisions Drive Complementary Product Shakeouts" by Lamar Pierce (2009).
3 We should not be surprised that bad actors arise to exploit the system, a story that John Oliver recounts with aplomb.

Monday, 21 December 2015

Can Prospect Theory Explain High Start-up Valuations?


Human beings are not particularly good at thinking about probabilities. The last several decades of research in psychology and behavioral economics have unearthed an array of cognitive biases in how we reason about uncertain events. For example, we are prone to misinterpret the results of diagnostic tests, by failing to account for the base rate of a disease in the population. This has enormous implications in the medical field, and may be leading us to over-diagnose and over-prescribe treatments for a variety of illnesses.

One of the foundational theories in behavioral economics is prospect theory, formulated by Daniel Kahneman and Amos Tversky. The theory is best known for the observation that humans experience losses as more consequential than gains of the same magnitude. Thus, simply changing the framing of a decision from a ‘gain frame’ to a ‘loss frame’ can make people much more averse to taking risks. In this blog post I want to focus on another prong of prospect theory: the over-weighting of rare events.

Prospect theory suggests—and experimental evidence supports—the idea that people ‘filter’ probabilities: we act as though very low probability events are more likely than they really are (and as though high probability events are less likely than they really are). This helps explain why people fret so much over low probability dangers such as shark attacks and ignore more mundane risks such as accidental falls. It also helps explain why people gamble money on lotteries even when the odds of winning are very slim, and the expected return is negative.
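
This ‘filtering’ has a standard functional form: the inverse-S probability weighting function from Tversky and Kahneman’s 1992 formulation of cumulative prospect theory, sketched below with their estimated parameter for gains.

```python
# The inverse-S probability weighting function from Tversky & Kahneman
# (1992): w(p) = p^g / (p^g + (1-p)^g)^(1/g), with g ~= 0.61 for gains.
# Small probabilities are weighted up; large ones are weighted down.

def weight(p, gamma=0.61):
    """Decision weight attached to an event of true probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.001, 0.01, 0.5, 0.99):
    print(f"true p = {p:>5} is felt as {weight(p):.3f}")
```

With gamma = 0.61, a one-in-a-thousand chance is felt as roughly one in seventy: exactly the distortion that makes lottery tickets (and, as this post goes on to argue, start-up bets) look attractive.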

What has this got to do with valuing a start-up? The conventional way to value a company is to make a forecast of its future cash flows, then discount these back to find the ‘net present value’ of its future income. Alternatively, as a heuristic we can apply a multiplier to its earnings based on accepted valuations of other companies. Neither of these works for an emerging venture with a novel business proposition (i.e. your typical Silicon Valley start-up). The future prospects of such a company are shrouded in uncertainty.
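
For concreteness, a bare-bones discounted cash flow calculation looks like this (illustrative numbers only). The whole exercise presumes the cash flows can be forecast, which is precisely what is impossible for a novel start-up.

```python
# Net present value: discount each future cash flow back to today.

def npv(cash_flows, discount_rate):
    """Present value of cash_flows[t] received t years from now."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# An up-front outlay followed by five years of steady profits,
# discounted at 10% per year:
print(round(npv([-1000, 300, 300, 300, 300, 300], 0.10), 2))  # 137.24
```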

Instead of trying to establish the likely path of a given venture’s future cash flows, investors—usually venture capitalists—take a portfolio approach. They pick companies they think will have a chance at becoming massively successful, but realize that many will fail to do so. Each investment is a bit like a lottery ticket. In the classical VC portfolio model, roughly one investment in ten would need to exit at a blockbuster valuation for the overall fund to make a decent return on investment.

In the present wave of technology venture activity, three key things are being done differently from the past. First, the definition of a ‘massively successful exit’ has inflated: ventures now aspire to be ‘unicorns’ with billion-dollar valuations. Second, investors are spreading their money out, investing in a larger number of ventures. This is most visibly true of accelerator programs, which provide large numbers of nascent ventures with seed funding and mentoring in return for a small equity stake: they explicitly rely on a scattershot approach. Instead of a VC picking ten investments and hoping for two or three large exits, the accelerator approach is to invest in a hundred startup teams and hope for one unicorn. Third, more ventures are staying private for longer, rather than going public through an IPO. As described in this FT article, this allows them to effectively manage their headline valuation figure by giving new investors guaranteed financial returns (risking, in the process, the equity of preceding investors). This prevents negative opinions of the venture’s prospects from being incorporated into its valuation.

And so we have a perfect storm in which valuations are based on someone’s estimate that a given venture will become a unicorn, and—according to prospect theory—they are biased to overestimate how likely this is. For every thousand startups, maybe one of them will be hugely successful, but all of them might be valued as if they have a one-in-a-hundred chance of this success. This is a problem. More fundamentally, we are dealing with such small probabilities that we can easily get them very wrong.
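
The arithmetic of that mistake is stark. A rough sketch (with hypothetical figures), treating the start-up purely as a lottery ticket on a unicorn exit:

```python
# Back-of-the-envelope version of the argument above:
# value = P(unicorn exit) x payoff, ignoring all lesser outcomes.

payoff = 1_000_000_000   # a unicorn exit, in dollars
true_p = 1 / 1000        # maybe one start-up in a thousand gets there
perceived_p = 1 / 100    # but it is priced as if one in a hundred does

print(f"'true' value:  ${true_p * payoff:,.0f}")       # about $1m
print(f"implied value: ${perceived_p * payoff:,.0f}")  # about $10m
# A tenfold overvaluation, driven by an error in a tiny probability.
```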

Earlier in the year I considered a few possible mechanisms by which a hypothetical technology bubble might burst. Here, I’ve described one psychological factor that might be behind high startup valuations in the first place. It’s also worth noting that prospect theory can explain rapid changes in investor sentiment. If prices start falling—for example if a bubble shows signs of bursting—investors can switch from a gain mindset to a loss mindset, and immediately become much more risk averse. I hope this doesn’t happen, because the present wave of entrepreneurial activity is generating a lot of innovation. But a wise investor or entrepreneur should be aware that the tide might turn in the near future, and plan accordingly—or risk getting swept away when it does.

Saturday, 31 October 2015

Let Them Eat (Micro)Chips: The Second Machine Age and the Spectre of Technological Unemployment

We are in the midst of the greatest economic upheaval since the industrial revolution. This is the premise of The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, a book discussing the economic implications of present day technological trends. It is an excellent piece, which touches on several topics I have previously explored in this blog, from the trends towards scalability and the consequent ‘winner takes all’ market dynamics, to the deep challenges the information age poses to the measurement of economic growth.

The book has a compelling overarching theme: technology is driving two forces, one positive for society and one negative. On the one hand, technological change is generating an enormous bounty of economic growth. On the other hand, it is also driving an increasing spread between rich and poor, and these economic faultlines could undermine the basic fabric of society.

Behind both bounty and spread is the rise of machine intelligence. Machines can take on ever more tasks, even ones that a decade ago we thought would be impossible to automate. The poster child for this is driverless cars. Technology experts used to think driving was so complex that humans would always have an advantage over computers, but the exponential progress of technology has rendered this prediction wrong. Google has been testing driverless cars for several years, and Tesla’s Autopilot mode has already made automated vehicles a commercial reality.

One of the discussions in the book I find most compelling is on the subject of technological unemployment. At least since the days of the Luddites, the spectre of machines taking our jobs has worried generations of workers and commanded much attention in social and political science. The prevailing wisdom in the contemporary economics establishment is that technological unemployment is, indeed, a phantom, one we need not worry too much about. The argument goes as follows: while technological change may tear down old industries, it opens up new possibilities, and through the process of entrepreneurial action old ‘factors of production’ can be redeployed to productive uses. People whose skills become obsolete can learn new skills; they just need to be flexible about the type of work they are willing to do.

Brynjolfsson and McAfee make a compelling case that technological unemployment is a legitimate concern. They point to three main reasons:

1.) Rates of change
The argument against technological unemployment rests on the idea that people can adapt and find employment in growth industries. This reasoning holds as long as the rate of adaptation is faster than the rate of technological change. Historically this has been the case: despite the gales of creative destruction blowing strongly, society has adapted.* However, Brynjolfsson and McAfee point out that just because a trend held for 200 years doesn’t mean it holds forever. The rate of technological change has been increasing; can we expect that individuals and the institutions of society will adapt at an ever increasing rate as well?

2.) Elasticity of consumption
Another part of the argument against technological unemployment is that gains in productivity lead to lower prices, which in turn stimulate a higher volume of consumption. This assumption – that in aggregate the long-run “elasticity of demand” is approximately one – would provide an adjustment mechanism if technology continues to raise productivity. The authors point out that if this assumption is wrong, then economic growth would eventually come grinding to a halt.
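
A stylized worked example (my numbers, not the authors’) shows why the elasticity assumption matters so much:

```python
# Productivity doubles, so unit labor cost and price halve. Whether
# total labor demand survives depends on the elasticity of demand.

def labor_demand(elasticity, productivity_gain=2.0,
                 base_quantity=100.0, base_hours=100.0):
    """Hours of labor demanded after prices fall with productivity."""
    # Price falls in proportion to the productivity gain, so quantity
    # demanded scales by gain^elasticity under constant elasticity...
    quantity = base_quantity * productivity_gain ** elasticity
    # ...while each unit now needs only 1/gain as much labor.
    return quantity * (base_hours / base_quantity) / productivity_gain

print(labor_demand(elasticity=1.0))  # 100.0  -> employment preserved
print(labor_demand(elasticity=0.5))  # ~70.7  -> employment shrinks
```

With unit elasticity, doubled output exactly absorbs the halved labor content per unit; anything less elastic means technology destroys more work than cheaper prices create.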

A corollary of the elasticity of consumption argument, not addressed by the authors, is that it relies on prices going through periods of deflation. Deflation occurred in the 1930s US and 1990s Japan, and might have occurred during the 2008–2012 Great Recession were it not for the unconventional monetary policy of central banks around the world. Avoiding deflation has been celebrated for averting a potentially disastrous depression. But if prices are never allowed to fall, we lose one of the economic mechanisms for adjusting to technological change. I don’t have a clear-cut answer to this dilemma, but I would like to see a few more economists discussing this issue.

3.) Floor on wages
This argument is presented with a thought experiment: 
“Imagine that tomorrow a company introduced androids that could do absolutely everything a human worker could do, including building more androids. There’s an endless supply of these robots, and they’re extremely cheap to buy and virtually free to run over time.” 
In this hypothetical end-game scenario, the equilibrium wage for human labor falls to zero. Managed well, we’d be in a utopia; managed poorly, a dystopia.

This is no mere parlor game. The authors also point out that in a digital economy, in which the output of superstars can be freely reproduced, the equilibrium wage for non-superstars is already zero, or close to it. These non-superstars look for work in other sectors, pushing wages down there. Eventually, “If neither the worker nor any entrepreneur can think of a profitable task that requires the worker’s skills and capabilities, then that worker will go unemployed indefinitely.”

Something implicit and horrifying in this last mechanism is that the free market’s solution to an oversupply of workers is starvation. When demand for a typical material good shrinks, its price falls. The supply side of the market adjusts by producing less of it. But when demand for labor shrinks, its price (i.e. the wage rate) may fall, but this doesn’t translate to a lower supply of labor (i.e. a lower population), except through violent means.

The book concludes that governments need to take action in order to make the most of technology’s bounty while minimizing the spread, or at least mitigating some of its worst consequences. Amongst other things, the authors advise greater investment in education (“higher teacher salaries and more accountability”), more support for entrepreneurship, more investment in science, and more progressive taxation, especially raising taxes on those with superstar levels of income.

The authors join a growing minority of commentators who view a Basic Income – a guaranteed minimum income paid by the government – as the best solution to rising inequality.** In the long run this might be the only feasible way to organize an economy which only requires a small, skilled minority to generate most of its economic output.

Technological change bears the potential to benefit everyone, but it also has the potential to impoverish the majority while enriching a few. This is not some science-fiction future – it is starting to happen today. The warning bells are sounding, and wise leadership will be needed to steer the ship through the coming storm.

______________________________

* Though not without spells of pain and suffering along the way.
** I find it quite noteworthy that this possibility is now part of mainstream discourse, and I found it even more surprising to learn that it almost became a reality under the Nixon administration.