Monday, 17 July 2017

On Schell, Schelling, and Nuclear War

As a mathematical tool, game theory is useful for formalizing our intuitions so we can analyze them systematically. Game theory is most powerful, however, when it shows us that rigorous thinking can lead to counter-intuitive results. In this post I juxtapose two writers—Jonathan Schell, a journalist, and Thomas Schelling, a game theorist—who have thought in incredible depth about one of the gravest threats to mankind’s existence: the possibility of nuclear war.

I first learned about Jonathan Schell by reading his obituary in March 2014.1 Schell authored ‘The Fate of the Earth’, which is at once a visceral, historical account of the atomic bombing of Hiroshima and a scientific and philosophical meditation on the possibility of human extinction by nuclear war. The book draws its power from its opening: real accounts of the horrifying effects of an atomic weapon—the fire spreading through the city, outpacing its fleeing populace; masses more dying of radiation sickness—before shifting fluidly to hypotheticals in which New York City is attacked with a nuclear weapon. It discusses the predicted sky-scorching effects of all-out nuclear war and dwells on the bleak prospect of extinction, of an infinite future in which humans are absent from the Universe. Schell’s conclusion was that the only way to prevent nuclear holocaust is a worldwide movement of nuclear disarmament: as long as nuclear weapons exist, the risk of their being used, however infinitesimal, is too high.

If we agree that complete disarmament is a desirable end point (a hotly debated topic), can we actually get there in practice? This is where Schelling comes in. Schelling is known within social science for breakthrough contributions to the analysis of coordination, a thorny corner of game theory where the standard Nash Equilibrium solution concept gives rise to a proliferation of equilibria, and for pioneering the use of computational models to show that small shifts in individual-level preferences can cause large changes in society-scale outcomes.2
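Schelling’s best-known computational model, the ‘checkerboard’ model of residential segregation, gives a flavor of this work. Below is a minimal sketch in Python (my own simplification for illustration, not Schelling’s original specification): agents of two types are content so long as a modest fraction of their neighbors are like them, yet even this mild preference produces heavily segregated neighborhoods.

```python
import random

SIZE = 20          # 20 x 20 grid
VACANCY = 0.1      # ~10% of cells start empty
TOLERANCE = 0.3    # content if at least 30% of neighbors share your type

def make_grid():
    return [[None if random.random() < VACANCY else random.choice('AB')
             for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(grid, r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                if grid[r + dr][c + dc] is not None:
                    yield grid[r + dr][c + dc]

def like_share(grid, r, c):
    nbrs = list(neighbors(grid, r, c))
    return 1.0 if not nbrs else sum(n == grid[r][c] for n in nbrs) / len(nbrs)

def step(grid):
    """Move every discontented agent to a random vacant cell; return count moved."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and like_share(grid, r, c) < TOLERANCE]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))                 # the vacated cell becomes empty
    return len(movers)

def segregation_index(grid):
    """Average share of like-typed neighbors across all agents."""
    shares = [like_share(grid, r, c) for r in range(SIZE)
              for c in range(SIZE) if grid[r][c] is not None]
    return sum(shares) / len(shares)

grid = make_grid()
print(f"before: {segregation_index(grid):.2f}")   # ~0.50, a well-mixed city
for _ in range(200):
    if step(grid) == 0:
        break
print(f"after:  {segregation_index(grid):.2f}")   # typically 0.7+, sharply sorted
```

A preference as weak as ‘I don’t want to be in a small minority on my block’ is enough, at the system level, to generate near-total segregation: exactly the kind of counter-intuitive individual-to-aggregate result described above.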

The interplay of game theory as a scholarly field and nuclear strategy as a matter of applied international relations goes back a long way. The concept of ‘mutually assured destruction,’ often known by the acronym MAD, is a game-theoretic one. It holds that neither adversary in a nuclear conflict will employ a first-strike strategy if it knows that the other side will retain the capability to wipe it out through retaliation. The doctrine of MAD has entered the popular discourse, and was parodied perfectly by Kubrick’s Dr. Strangelove.

An interesting—and very practical—corollary of MAD reasoning is explored by Schelling in the Appendix to his 1960 classic ‘The Strategy of Conflict.’ He argues, and shows mathematically, that partial nuclear disarmament is extremely risky. The capability to wipe out an opponent even after one has suffered a pre-emptive strike is what lends the mutually assured destruction set-up its stability. An opponent who fears they will have no capability left with which to retaliate if they are attacked has greater reason to take the risk of initiating the first strike. The upshot of the game-theoretic analysis is the rather counter-intuitive result that partial disarmament is worse than no disarmament at all.
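To get a feel for the logic, here is a toy numerical illustration (my own, far cruder than Schelling’s actual analysis): suppose each warhead independently survives an enemy first strike with probability s, and deterrence holds so long as at least one warhead survives to retaliate.

```python
# Toy model (illustrative only): each warhead independently survives an
# enemy first strike with probability s = 0.2. Deterrence rests on the
# probability that at least one survivor remains to retaliate.
def p_retaliation(n_warheads, s=0.2):
    return 1 - (1 - s) ** n_warheads

for n in (1000, 100, 10, 3, 1):
    print(f"{n:4d} warheads -> P(retaliation) = {p_retaliation(n):.4f}")
# With 1000 warheads retaliation is a near-mathematical certainty, so
# striking first gains nothing. Disarm down to 3 and the attacker escapes
# retaliation with probability 0.8**3 = 0.512: a first strike looks tempting.
```

In this toy world the large arsenals are, perversely, the stable configuration; it is the sparse, partially disarmed one that rewards pre-emption.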

The von Neumann / Schelling / MAD reasoning was grounded in the Cold War context of two largely symmetric, competing nuclear powers. Game theory also assumes actors behave ‘rationally,’ i.e. each actor is self-interested and forward-looking and assumes that other actors are too. This seems to have been a reasonable assumption for that era.3 As of 2017 it is not clear that these assumptions still apply, which is a cause for concern. The ‘players’ in today’s nuclear ‘game’ are not so symmetric, nor is it clear that they will behave as predictably as economists’ rational actors do. It is presumably for this reason that the Bulletin of the Atomic Scientists has moved its Doomsday Clock to ‘two and a half minutes to midnight,’ its riskiest point since 1953. It is a wise time to revisit the writings of both Schell and Schelling, to take this existential threat seriously, and to hope that cool heads will prevail.

____________________________
1 Another post on this blog that was first inspired by an obituary is the one discussing the work of James Martin, who passed away in 2013. The following summer I read both Schell’s and Martin’s landmark books. Some of my thoughts on Martin’s ‘The Meaning of the 21st Century’ are recorded here.
2 An excellent analysis of the organizational apparatus underlying the military strategy during the Cuban missile crisis is provided by Graham Allison in his classic, ‘Essence of Decision.’
3 Other posts I've written drawing on Schelling's ideas can be found here and here.


Saturday, 14 May 2016

Alternatives to Growth? Platforms, Modularity and the Circular Economy


The following is an essay I submitted to the St. Gallen Symposium's 'Wings of Excellence' Award; it was selected as a finalist for the award:
The St. Gallen Symposium Leaders of Tomorrow have posed the question: what are the alternatives to economic growth? In this essay I draw on ideas from technology strategy and systems theory to put forward a vision for sustainable improvement in human well-being which does not depend on economic growth as it is currently measured. First, I discuss why we need a new approach to progress. Then I describe a new way of thinking about ‘progress’ which transcends the traditional growth orientation. Three key concepts—platforms, modularity, and the circular economy—suggest ways to create value without transactions, to stimulate innovation at low cost, and to make sustainability a design feature of the economy rather than an afterthought. After introducing each concept in turn, I discuss the synergies between all three, which together make them a compelling alternative to the present narrow focus on economic growth.
The Challenge
The prevailing paradigm of growth-oriented capitalism has several intrinsic flaws. Here I highlight two.
First, there is the issue of resource sustainability. Much of today’s economic activity is generated roughly as follows: we unearth some raw material from the ground, process it through a multitude of steps, use the finished product, and then throw it away, at which point it is sent to landfill. Before the industrial revolution, this system worked because the quantities of materials and waste were minuscule compared to the overall system. Nowadays, due to population growth and rising living standards, we face the very real possibility of finding key resources in short supply.[1] Our waste outputs—in the form of greenhouse gases—are now having geologically significant effects on the planet.[2] As many have observed, perpetual growth is a physical impossibility because of the limitations of the planetary system.[3] Hence, we require an alternative.
Second, there is the issue of poverty. Growth-oriented capitalism has failed to solve the problem that hundreds of millions of people cannot afford things which those of us in developed countries take for granted—food, clean water, housing and household comforts, access to education. ‘Trickle-down economics’ has failed; growth has increasingly benefited those who are already wealthy.[4] Moreover, innovation is directed towards things people or governments in the rich world will pay for, such as smartphones, medical devices, and military hardware. The spending on so-called frugal innovation, which creates novel products for the world’s poor, is a fraction of what is spent on high-end innovation. To benefit the majority of mankind, innovations in the future will need to be dramatically lower cost than those of today.
Platforms
The concept of a ‘platform’ has emerged in the last two decades from studies on the economics of technology. In a technological system, a platform is a central component which other complementary components can attach to. For example, in the software world, an operating system (OS) is a platform on which individual pieces of software can be installed; it is the joint package of OS plus software that creates value for users. More abstractly, in market systems a platform may be a central organization with which other individuals and/or organizations interact. For example, eBay is a ‘two-sided’ platform which brings together sellers and buyers of physical goods. In the words of management professor Annabelle Gawer, a platform ‘acts as a foundation upon which other firms can develop complementary products, technologies or services.’[5]
The power of platforms is that they bring people together to allow mutually valued interactions. Some of these entail transactions—such as a good being sold on eBay—in which case they show up as contributing to economic growth. But much of the time the interactions that platforms facilitate involve no money changing hands. For example, the website Quora is a platform on which people can post questions or answers, exchanging valuable knowledge without any price attached. This can create tremendous value, but does not generate economic growth as measured by GDP.
Platforms benefit from a phenomenon that economists call ‘network externalities:’ the value of joining a platform rises the more other people there are already using it. For example, social media platforms are more attractive to use if they have an active community of users to interact with. This results in dramatically increasing returns to scale, captured by ‘Metcalfe’s law,’ which states that ‘the value of a network goes up as the square of the number of users.’[6] In many cases only a small fraction of this value is accounted for as ‘economic growth’ in national statistics.
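The arithmetic behind Metcalfe’s law is straightforward: n users can form n(n−1)/2 distinct pairwise connections, so potential value grows roughly as the square of n. A quick sketch:

```python
# n users can form n*(n-1)/2 distinct pairwise connections, which is the
# counting argument behind Metcalfe's law.
def potential_links(n_users):
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:5d} users -> {potential_links(n):9,d} possible connections")
# 10x the users yields roughly 100x the connections: sharply increasing
# returns to scale, most of which never appears in GDP statistics.
```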
Platforms are especially well suited to digital technology, which enables fast, cheap information flows, and makes a platform easy to scale up. Digital platforms make efficient use of raw materials: once a fixed investment is made in hardware, the only ongoing resource a digital platform uses is the electricity to run its servers. Digital platforms therefore create tremendous value with very few natural resources. This makes them an essential pillar in a future that transcends growth-oriented capitalism.
Modularity
The concept of modularity is closely related to the idea of a platform. A system is modular if it is partitioned into constituent parts with clearly defined interfaces. A product system is modular if its components can be easily swapped out and interchanged with others. For example, the traditional PC has a modular architecture: its internal components (e.g. graphics card, sound card) and peripheral components (e.g. keyboard, monitor, mouse) all plug in through standard interfaces and can be individually upgraded.[7] An organization can be said to be modular if it is made up of subdivisions that operate in a relatively self-contained manner, such as the academic departments of a university.
The essence and importance of modularity were first articulated by Herbert Simon in his seminal essay, ‘The Architecture of Complexity.’[8] His observation: a modular architecture allows a system to evolve through trial-and-error experimentation with alternate components. When a new component enhances the value of the system, it can be retained; if it detracts from the system, it gets discarded. This general observation reads across directly to the modularity and evolution of technological products; the modular architecture of the PC is credited with catalyzing innovation in the computer industry.
In a 2009 essay, Carliss Baldwin and C. Jason Woodard observe that platform-based industries by their nature exhibit a modular architecture: ‘In essence, a “platform architecture” is a modularization that partitions the system into (1) a set of components whose design is stable and (2) a complementary set of components that are allowed – indeed encouraged – to vary.’[9] Platforms therefore have the potential to be highly ‘evolvable’ systems. They allow new designs and product permutations to be tried out at low cost, with little waste. In other words, platforms can facilitate efficient innovation, enhancing value creation without entailing massive resource expenditures.
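In software terms, Baldwin and Woodard’s partition maps neatly onto a stable interface plus interchangeable implementations. A small illustrative sketch (hypothetical class names of my own, not drawn from any of the cited works):

```python
from typing import Protocol

class Module(Protocol):
    """The stable interface: the part of the design that must not vary."""
    def run(self, data: str) -> str: ...

class Platform:
    """The stable core component; only its attached modules change."""
    def __init__(self) -> None:
        self.modules: list[Module] = []

    def attach(self, module: Module) -> None:
        self.modules.append(module)      # any conforming module plugs in

    def process(self, data: str) -> str:
        for module in self.modules:      # the modules vary; the pipeline doesn't
            data = module.run(data)
        return data

class Uppercaser:
    def run(self, data: str) -> str:
        return data.upper()

class Reverser:
    def run(self, data: str) -> str:
        return data[::-1]

platform = Platform()
platform.attach(Uppercaser())            # modules can be swapped, upgraded,
platform.attach(Reverser())              # or discarded without touching the core
print(platform.process("modularity"))    # -> YTIRALUDOM
```

Experimenting with a new module means writing one small class rather than redesigning the system: this is the low-cost, low-waste evolvability described above.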
The Circular Economy
A third key concept I wish to highlight is the notion of the circular economy. As noted above, our present economic paradigm entails extracting natural resources from the ground, and burying our waste products, which in systems dynamics terms creates an ‘open loop.’ Proponents of a circular economy, such as the Ellen MacArthur Foundation, argue we need to close this loop. In the first instance, we should recycle waste as a source of raw materials. More deeply, we need to redesign our products and our industries to close the resource loop. When a product is decommissioned at the end of its lifespan, not all its components are useless. Many, in fact, may be in a good enough condition to use in a new product, but under the present system they can end up in landfill or in an incinerator. If the original product were designed with disassembly in mind, then retrieving reusable components becomes a real possibility.
The building industry provides an exemplary case study. Construction accounts for around 15% of global greenhouse gas emissions.[10] Construction is carbon intensive because the chemical process for manufacturing cement, an ingredient of concrete, releases large quantities of carbon dioxide. When a concrete structure is demolished—either at the end of its lifespan, or (more commonly) to make space for a newer building—the rubble is typically shipped to landfill. New concrete is then poured, meaning new cement is used and new emissions are generated.[11] Efforts to close this wasteful loop are vitally important, given the need to build quality housing in the rapidly growing urban centers of the world’s emerging economies. One step will be increasing the degree of recycling of old concrete rubble, which can be used as an input to building processes, thereby diverting it from landfill. But the truly ‘circular economy’ approach will entail designing building materials with re-use in mind. Reinforced concrete slabs will be treated as components that can be recovered and reconfigured, instead of scrapped, when a building needs to be replaced. This has been an architectural dream at least since the ‘Metabolist’ movement in post-war Japan, and modern researchers are getting nearer to creating it as a reality.[12]
Synthesis
Individually, these three concepts are each powerful levers to improve quality of life. Together, the complementarity between them makes for an even more potent recipe.
The aim of this essay is to advocate that we move towards a model of capitalism based on circular resource flows and rising quality of life driven by modular innovation. By itself, a circular economy may imply stagnation in living standards. It has echoes of Schumpeter’s ‘circular flow’ in which every year industrial activity looks much like the last.[13] And by itself, evolutionary innovation based on experimentation with modules can be highly resource intensive; we can waste a lot of resources to produce modules we don’t use, and there is a strong temptation to throw out a module once we find a better one. This is clearly visible in the huge amount of electronic waste that developed countries pump out every year.
We need to move towards an industrial infrastructure based on stable, long-lasting platforms and interchangeable modular components that attach to the platform but themselves conform to a closed-loop production process. This abstract idea can apply in numerous realms, from the now-familiar electronics and software platforms, through manufacturing—using technologies such as 3D printing as the base platform—to the built environment, in which modular skyscrapers could provide a housing solution for the world’s growing urban population. The synergies between platforms, modularity, and a circular economy are several; I enumerate four here:
1. Economies in design. By letting a common platform underlie a variety of modules, we can avoid wasting the effort of replicating something that has been designed elsewhere. In other words, platforms allow us to converge on a set of common standards, which makes design much more efficient.
2. Economies in production. With a common underlying platform we obtain economies of scale in the production process for both the platform and the modules. This will play a big role in making innovations accessible to the world’s poor.
3. Rapid scalability of improvements. When a better design for a module is invented, the use of a common underlying platform will allow the new design to be diffused and adopted widely with great ease. Many new designs will be distributed royalty-free under an ‘open source’ license.
4. Re-use of modules. Modules can be designed such that they can be disassembled and altered, rather than disposed of, if a better design for that module is developed. This is also a process that benefits from economies of scale in the infrastructure for module renewal.
Consider, by way of illustration, a world with a commonly agreed upon standard for 3D printing, with widely available devices that can print with a small number of specified materials. The material feedstock for the printer would be derived by disassembling used products. The printer is the platform, and the products it makes are the modules. Creative designers anywhere in the world would post designs online that others could download and use: there would be rapid, evolutionary innovation in the modules. Replacing a physical good with the latest, updated model would become much like updating a piece of software today.
Together, platforms, modularity, and the circular economy work in synthesis to make economic activity more environmentally sustainable, and make innovations accessible to the lowest income people on the planet. They offer a compelling alternative to the narrow focus on economic growth that prevails today.
  
References
Bajželj, B., Allwood, J. M., & Cullen, J. M. 2013. Designing climate change mitigation plans that add up. Environmental science & technology, 47(14): 8062-8069.
Baldwin, C. Y., & Woodard, C. J. 2009. The architecture of platforms: A unified view. In A. Gawer (Ed.), Platforms, markets and innovation. Cheltenham, UK: Edward Elgar Publishing.
Bresnahan, T. F., & Greenstein, S. 1999. Technological competition and the structure of the computer industry. The Journal of Industrial Economics, 47(1): 1-40.
Gawer, A. 2009. Platforms, markets and innovation: An introduction. In A. Gawer (Ed.), Platforms, markets and innovation. Cheltenham, UK: Edward Elgar Publishing.
Graedel, T. E., Harper, E. M., Nassar, N. T., Nuss, P., & Reck, B. K. 2015. Criticality of metals and metalloids. Proceedings of the National Academy of Sciences of the United States of America, 112(14): 4257-4262.
Meadows, D., Randers, J., & Meadows, D. 2004. Limits to growth: The 30-year update. Chelsea Green Publishing.
Rios, F. C., Chong, W. K., & Grau, D. 2015. Design for disassembly and deconstruction-challenges and opportunities. Procedia Engineering, 118: 1296-1304.
Saez, E., & Zucman, G. 2016. Wealth inequality in the United States since 1913: Evidence from capitalized income tax data. Quarterly Journal of Economics, (forthcoming).
Schumpeter, J. A. 1934. The theory of economic development: An inquiry into profits, capital, credit, interest, and the business cycle. Cambridge, MA: Harvard University Press.
Shapiro, C., & Varian, H. 1999. Information rules. Cambridge, MA: Harvard Business School Press.
Simon, H. A. 1962. The architecture of complexity. Proceedings of the American Philosophical Society, 106(6): 467-482.
Waters, C. N., Zalasiewicz, J., Summerhayes, C., Barnosky, A. D., Poirier, C., Gałuszka, A., Cearreta, A., Edgeworth, M., Ellis, E. C., & Ellis, M. 2016. The Anthropocene is functionally and stratigraphically distinct from the Holocene. Science, 351(6269).



[1] See, for example, Graedel et al. (2015) on metals criticality.
[2] See Waters et al. (2016).
[3] See Meadows, Randers, and Meadows (2004).
[4] For example, since the financial crisis wealth gains in the United States have predominantly gone to the top 0.1% of households in the wealth distribution; average wealth of the bottom 90% of households has fallen (Saez & Zucman, 2016).
[5] Gawer (2009: 2).
[6] Shapiro and Varian (1999: 184).
[7] See Bresnahan and Greenstein (1999).
[8] Simon (1962).
[9] Baldwin and Woodard (2009).
[10] 7.7 Gt of a total 50.6 Gt CO2 equivalent in 2010; see Bajželj, Allwood, and Cullen (2013).
[11] Concrete production has been accelerating, and the scale of production is immense. Geologist Colin Waters and colleagues point out that concrete is now a geologically significant material in the stratigraphy of the planet: ‘The past 20 years (1995–2015) account for more than half of the 50,000 Tg of concrete ever produced, equivalent to ~1 kg m−2 of the planet surface.’ (Waters et al., 2016)
[12] See, for example, Rios, Chong, and Grau (2015).
[13] See chapter 1 of Schumpeter (1934).

Saturday, 26 March 2016

“X-Contingent Loans:” Valuable Innovation or Inevitable Temptation?

The road to hell is paved with good intentions. The best of intentions seem to lie behind a proposal by Montazerhodjat, Weinstock, and Lo that chronically sick people should be able to take out health-contingent loans to finance curative treatments instead of paying repeatedly for short-term medicine. The logic behind the suggestion is undeniable: the patient’s financial and health outcomes both look better under the hypothetical loan-based model. Just like a renter who prefers taking out a mortgage to buy a home over renting indefinitely, the patient exchanges a perpetual stream of payments for a lump sum of debt that they can eventually pay off. In addition, the loan they propose is contingent on the person’s continued health—if the cure doesn’t work, the loan is written off. This aligns the incentives of the pharmaceutical vendor with those of the patient: both want to see the patient permanently cured. In the long run, if these loans were widely available, more R&D would be directed towards curative medicine rather than temporary or partial treatments.

Similar rumblings can be heard in the discussion around financing higher education. The idea of a “human capital contract” (HCC) is that a prospective student agrees to pay a percentage of their future income in exchange for having their tuition fees paid up front. It is a financial model that replaces the concept of student debt with one of student equity: by sponsoring my university studies you are “buying shares” in my future prospects. The idea is analysed in depth by Miguel Palacios in his 2004 book “Investing in Human Capital.” Palacios rightly points out that there are ethical and practical dilemmas involved. On the ethical side, the concept of equity in a human being has echoes of slavery or indentured servitude, and no matter what guarantees the sponsor makes to preserve the recipient’s freedom, there will be a risk of exploitative or coercive behavior by sponsors to ensure the recipient takes a higher-paying job. The idea of indefinite contracts is also rather unpalatable. On the practical side, the proposed contracts suffer from severe problems of adverse selection and moral hazard.1 A more palatable halfway house between plain vanilla loans and human capital contracts is the “income-contingent loan,” which you pay back with a percentage of your salary above a given threshold. Income-contingent loans typically include a capped repayment period, after which the remaining balance is written off.
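To make the mechanics concrete, here is a minimal sketch of an income-contingent repayment schedule (the parameters are invented for illustration, loosely echoing the UK design: a percentage of income above a threshold, with a multi-decade cap):

```python
# A minimal sketch of income-contingent repayment. Parameters are invented
# for illustration (loosely echoing the UK design), not a real product spec.
def repay(balance, incomes, rate=0.09, threshold=25_000,
          interest=0.03, max_years=30):
    for year, income in enumerate(incomes[:max_years], start=1):
        balance *= 1 + interest                        # interest accrues first
        payment = min(balance, rate * max(0.0, income - threshold))
        balance -= payment                             # 9% of income above threshold
        if balance <= 0:
            print(f"paid off in year {year}")
            return
    print(f"written off after {max_years} years; {balance:,.0f} forgiven")

# A graduate starting on 30,000 with 1,000 annual raises:
repay(45_000, [30_000 + 1_000 * y for y in range(40)])
# -> written off with a five-figure balance forgiven. The write-off is what
#    distinguishes this contract from an ordinary loan.
```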

The health-contingent loans to pay for medical treatment and the income-contingent loans to pay for education are part of a broader class of “human capital loans,” which are used to build up the intangible value embodied in the person themselves rather than in their tangible assets. They also belong to a category I am calling “X-contingent loans,” since the amount that gets repaid depends on how future events unfold. As the two examples suggest, X-contingent loans might provide a route to finance things that increase quality of life but for which traditional financial products don’t work. They may prove to be one of the defining financial innovations of the 21st century, unlocking tremendous value for poor people whose “liquidity constraint” prevents them from accessing healthcare or higher education. But they also pose substantial risks, which are not always acknowledged by their proponents. I will highlight two.

First, we consumers are not good at judging the costs and benefits of complicated financial products. The concept of compound interest is not intuitive, meaning we struggle to deal with our long-run finances, even when consumer financial products have deterministic payment profiles. Once you add “contingencies” into the mix, we become pretty hopeless at judging what is a good deal. Corporations have statisticians on staff to compute probabilities of myriad events occurring, and even they were caught out by the correlated decline in house prices that precipitated the financial crisis of 2008. As everyday consumers we have only the coarsest sense of what the future holds; we make decisions based on gut feel, not on probabilities. When it comes to complicated financial products we are easily misled, and even when a vendor is being transparent (the exception, not the rule these days), we tend to make poor choices. When we make poor choices, we lose out, and ultimately markets may fall apart completely.
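A trivial worked example of how compounding defeats intuition: a loan charging “2% per month” sounds like 24% a year, but it isn’t.

```python
# Compounding defeats gut feel: "2% per month" sounds like 24% per year.
monthly_rate = 0.02
naive_annual = 12 * monthly_rate                  # what intuition says: 24.0%
true_annual = (1 + monthly_rate) ** 12 - 1        # what compounding delivers
print(f"naive {naive_annual:.1%} vs actual {true_annual:.1%}")  # 24.0% vs 26.8%

balance = 10_000 * (1 + monthly_rate) ** 120      # left unpaid for 10 years
print(f"after 10 years: {balance:,.0f}")          # ~107,765, not the ~34,000
                                                  # that simple interest suggests
```

And that is a product with no contingencies in it at all.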

To emphasize this point, consider two different consumer financial products with “embedded options.” In the US, in contrast to many other developed nations, most mortgages are fixed-rate. However, mortgage borrowers are typically allowed to prepay their loan. If prevailing interest rates go down, it becomes attractive to refinance a mortgage at a lower rate. Mortgages thus carry an implicit call option held by the borrower, which creates “prepayment risk” for the lender, vastly complicating the valuation of the pools of mortgages that make up mortgage-backed securities. Mortgage providers compounded the complexity by issuing mortgages with attractive starting rates but complicated tiers of future payments (and some mortgage issuers simply engaged in outright fraud). The widespread, correlated defaults on mortgage payments from 2007 were an unforeseen consequence of the products’ complexity. The US market for automotive leasing also has options embedded by law in the leases: at the end of the lease the lessee can decide either to purchase the car at its pre-specified residual value, or to return it to the lessor.2 Monthly lease payments amortize the difference between the purchase price and the residual value, so lessors can offer attractive deals by using optimistic estimates of residual values. In the early 2000s, this market fell into dysfunction because lessors had systematically overestimated the residual values of vehicles, and far more vehicles were returned to lessors than anticipated. Many banks exited the industry, sustaining hundreds of millions of dollars in losses.
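The lease-end option is easy to state formally: the lessee returns the car whenever its market value has fallen below the contractual residual, so the lessor has in effect written a put option on the vehicle. A stylized sketch with invented numbers:

```python
import random

# The lessor's exposure at lease-end: the lessee buys the car if it is worth
# more than the contractual residual value, and hands it back if it is worth
# less. Numbers below are invented for illustration.
def lessor_loss(residual, market_value):
    return max(0.0, residual - market_value)   # a put option the lessor wrote

random.seed(1)
residual = 15_000                              # optimistic residual in the lease
# Suppose true lease-end values turn out to be scattered around 13,500:
losses = [lessor_loss(residual, random.gauss(13_500, 1_500))
          for _ in range(100_000)]
print(f"expected loss per lease: {sum(losses) / len(losses):,.0f}")  # ~1,600
# Overestimate residuals across the whole book and every lease loses money
# at once: the correlated losses that pushed banks out of the business.
```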

The second big problem with X-contingent loans is that prices respond to the quantity of money available to buy something. This is a fairly intuitive consequence of the forces of demand and supply: when buyers have more money available, the effective level of demand rises. This is why central banks lower interest rates to stimulate aggregate demand in the economy, but raise interest rates if excess demand is causing inflation. When we consider introducing a form of credit that is specific to one particular consumer good, we should expect a high rate of inflation in the prices for that good. The clearest real-life example is (again) the market for property, with the expansion of credit in the form of larger and easier-to-obtain mortgages. The long-run trend of allowing people to borrow ever larger multiples of their income has sent property prices skyrocketing. This effect is already in evidence in the US market for higher education, in which tuition fees have risen at roughly three times the rate of inflation, hand-in-hand with the expansion of student loans to pay those fees.3 If health-contingent loans expand the credit available to pay for treatments, we can anticipate that their prices (if left unregulated) will rise. This is compounded by the highly inelastic demand for health: people will pay for health at whatever price is asked—it is, after all, a matter of life or death.

All this considered, what can we do? My inclination is to treat X-contingent loans very cautiously. We should move forward slowly rather than rush to welcome this tempting-looking new market. And we should consider centralized government control. If we compare the situation with student loans in the US and the UK, neither is perfect, but the presence of private student debt in the US makes for a far more dysfunctional system than in the UK, which already runs on income-contingent loans. Mortgages have made home-ownership a realistic possibility for millions, but have also come close to sinking the global financial system. If we can learn from past mistakes, maybe with X-contingent loans we can expand access to healthcare and education in a humanistic rather than an exploitative way.
_________________________________
 1 In this context, adverse selection occurs because people who expect to earn a lot in future will prefer to take out loans to finance their education, while people who expect to earn less will opt for HCCs. Moral hazard occurs because a person has a lower incentive to take a higher paying job if some percentage of that extra income gets paid to their sponsor.
2 This example is taken from the academic article "Big Losses in Ecosystem Niches: How Core Firm Decisions Drive Complementary Product Shakeouts" by Lamar Pierce (2009).
3 We should not be surprised that bad actors arise to exploit the system, a story that John Oliver recounts with aplomb.

Monday, 21 December 2015

Can Prospect Theory Explain High Start-up Valuations?


Human beings are not particularly good at thinking about probabilities. The last several decades of research in psychology and behavioral economics have unearthed an array of cognitive biases in how we reason about uncertain events. For example, we are prone to misinterpret the results of diagnostic tests, by failing to account for the base rate of a disease in the population. This has enormous implications in the medical field, and may be leading us to over-diagnose and over-prescribe treatments for a variety of illnesses.
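The classic worked example of base-rate neglect runs as follows (the numbers are the standard textbook ones, chosen for illustration): a disease affects 1% of the population, and a test is 90% sensitive with a 9% false-positive rate. Most of us, doctors included, intuit that a positive result means the disease is likely; Bayes’ rule says otherwise.

```python
# Base-rate neglect, the standard textbook illustration: disease prevalence
# 1%, test sensitivity 90%, false-positive rate 9%.
prevalence, sensitivity, false_pos = 0.01, 0.90, 0.09

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # 9.2%
# Even with a "90% accurate" test, a positive result leaves under a 1-in-10
# chance of disease, because the healthy vastly outnumber the sick. Ignoring
# that base rate is exactly the bias behind over-diagnosis.
```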

One of the foundational theories in behavioral economics is prospect theory, formulated by Daniel Kahneman and Amos Tversky. The theory is best known for the observation that humans treat losses as more consequential than gains of the same magnitude. Thus, simply changing the framing of a decision from a ‘gain frame’ to a ‘loss frame’ can make people much more averse to taking risks. In this blog post I want to focus on another prong of prospect theory: the over-weighting of rare events.

Prospect theory suggests—and experimental evidence supports—the idea that people ‘filter’ probabilities: we act as though very low probability events are more likely than they really are (and as though high probability events are less likely than they really are). This helps explain why people fret so much over low probability dangers such as shark attacks and ignore more mundane risks such as accidental falls. It also helps explain why people gamble money on lotteries even when the odds of winning are very slim, and the expected return is negative.
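Prospect theory makes this distortion precise with a probability weighting function. A common parameterization is the one Tversky and Kahneman estimated in their 1992 ‘cumulative’ version of the theory, with gamma of roughly 0.61 for gains:

```python
# Tversky & Kahneman's (1992) probability weighting function, with their
# estimated gamma of 0.61 for gains: small probabilities get inflated,
# near-certainties get discounted.
def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"true p = {p:>5}: decision weight ~ {weight(p):.3f}")
# A 1-in-1000 chance carries the decision weight of roughly a 1-in-70 one,
# which is why lottery tickets sell and shark attacks terrify.
```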

What has this got to do with valuing a start-up? The conventional way to value a company is to make a forecast of its future cash flows, then discount these back to find the ‘net present value’ of its future income. Alternatively, as a heuristic we can apply a multiplier to its earnings based on accepted valuations of other companies. Neither of these works for an emerging venture with a novel business proposition (i.e. your typical Silicon Valley start-up). The future prospects of such a company are shrouded in uncertainty.
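For a business with forecastable cash flows, the discounting step is mechanical; a bare-bones sketch:

```python
# Bare-bones discounted cash flow: the conventional valuation approach that
# breaks down when cash flows cannot be forecast with any confidence.
def npv(cash_flows, discount_rate):
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

print(f"{npv([100, 110, 120, 130], 0.10):,.1f}")   # ~360.8 for steadily
                                                    # growing cash flows
# For a pre-revenue start-up every entry in cash_flows is a guess, so the
# method offers little more than false precision.
```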

Instead of trying to establish the likely path of a given venture’s future cash flows, investors—usually venture capitalists—take a portfolio approach. They pick companies they think will have a chance at becoming massively successful, but realize that many will fail to do so. Each investment is a bit like a lottery ticket. In the classical VC portfolio model, roughly one investment in ten would need to exit at a blockbuster valuation for the overall fund to make a decent return on investment.

In the present wave of technology venture activity, three key things are being done differently to the past. First, the definition of a ‘massively successful exit’ has inflated: ventures now aspire to be ‘unicorns’ with a billion-dollar valuation. Second, investors are spreading their money out, investing in a larger number of ventures. This is most visibly true in accelerator programs, which provide large numbers of nascent ventures with seed funding and mentoring in return for a small equity stake: they explicitly rely on a scattershot approach. Instead of a VC picking ten investments and hoping for two or three large exits, the accelerator approach is to invest in a hundred startup teams and hope for one unicorn. Third, more ventures are staying private for longer, rather than go public through an IPO. As described in this FT article, this allows them to effectively manage their headline valuation figure by giving new investors guaranteed financial returns (risking, in the process, the equity of preceding investors). This prevents negative opinions of the venture’s prospects from being incorporated in its valuation.

And so we have a perfect storm in which valuations are based on someone’s estimate that a given venture will become a unicorn, and—according to prospect theory—they are biased to overestimate how likely this is. For every thousand startups, maybe one of them will be hugely successful, but all of them might be valued as if they have a one-in-a-hundred chance of this success. This is a problem. More fundamentally, we are dealing with such small probabilities that we can easily get them very wrong.
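The arithmetic of that mispricing is stark. A stylized calculation:

```python
# Stylized: 1,000 startups, each priced as though it has a 1-in-100 chance
# of a $1bn exit, when the true odds are closer to 1-in-1000.
n_startups = 1_000
exit_value = 1_000_000_000
perceived_p, true_p = 1 / 100, 1 / 1_000

priced_at = n_startups * perceived_p * exit_value   # $10bn of paper value
expected = n_startups * true_p * exit_value         # $1bn of expected value
print(f"aggregate valuation ${priced_at / 1e9:.0f}bn "
      f"vs expected outcome ${expected / 1e9:.0f}bn")
# A 10x gap between price and expected value, generated entirely by the
# overweighting of one small probability.
```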

Earlier in the year I considered a few possible mechanisms by which a hypothetical technology bubble might burst. Here, I’ve described one psychological factor that might be behind high startup valuations in the first place. It’s also worth noting that prospect theory can explain rapid changes in investor sentiment. If prices start falling—for example if a bubble shows signs of bursting—investors can switch from a gain mindset to a loss mindset, and immediately become much more risk averse. I hope this doesn’t happen, because the present wave of entrepreneurial activity is generating a lot of innovation. But a wise investor or entrepreneur should be aware that the tide might turn in the near future, and plan accordingly—or risk getting swept away when it does.

Saturday, 31 October 2015

Let Them Eat (Micro)Chips: The Second Machine Age and the Spectre of Technological Unemployment

We are in the midst of the greatest economic upheaval since the industrial revolution. This is the premise of The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, a book discussing the economic implications of present day technological trends. It is an excellent piece, which touches on several topics I have previously explored in this blog, from the trends towards scalability and the consequent ‘winner takes all’ market dynamics, to the deep challenges the information age poses to the measurement of economic growth.

The book has a compelling overarching theme: technology is driving two forces, one positive for society and one negative. On the one hand technological change is generating an enormous bounty of economic growth. On the other hand, it is also driving an increasing spread between rich and poor, and these economic faultlines could undermine the basic fabric of society.

Behind both bounty and spread is the rise of machine intelligence. Machines can take on ever more tasks, even ones that a decade ago we thought would be impossible to automate. The poster child for this is the driverless car. Technology experts used to think driving was so complex that humans would always have an advantage over computers, but the exponential progress of technology has rendered this prediction wrong. Google has been testing driverless cars for several years, and Tesla’s Autopilot mode has already made automated vehicles a commercial reality.

One of the discussions in the book I find most compelling is on the subject of technological unemployment. At least since the days of the Luddites, the spectre of machines taking our jobs has worried generations of workers and commanded much attention in social and political science. The prevailing wisdom in the contemporary economics establishment is that technological unemployment is, indeed, a phantom, one we need not worry too much about. The argument goes as follows: while technological change may tear down old industries, it opens up new possibilities, and through the process of entrepreneurial action old ‘factors of production’ can be redeployed to productive uses. People whose skills become obsolete can learn new skills, they just need to be flexible about the type of work they are willing to do.

Brynjolfsson and McAfee make a compelling case that technological unemployment is a legitimate concern. They point to three main reasons:

1.) Rates of change
The argument against technological unemployment rests on the idea that people can adapt and find employment in growth industries. This reasoning holds as long as the rate of adaptation is faster than the rate of technological change. Historically this has been the case: despite the gales of creative destruction blowing strongly, society has adapted.* However, Brynjolfsson and McAfee point out that just because a trend held for 200 years doesn’t mean it holds forever. The rate of technological change has been increasing; can we expect that individuals and the institutions of society will adapt at an ever increasing rate as well?

2.) Elasticity of consumption
Another part of the argument against technological unemployment is that gains in productivity lead to lower prices, which in turn stimulate a higher volume of consumption. This assumption – that in aggregate the long-run “elasticity of demand” is approximately one – would provide an adjustment mechanism if technology continues to raise productivity. The authors point out that if this assumption is wrong, then economic growth would eventually come grinding to a halt.
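The role of that unit-elasticity assumption is easiest to see with a constant-elasticity demand curve (a stylized sketch of my own, not from the book):

```python
# Stylized constant-elasticity demand, Q = k * P**(-elasticity). When the
# elasticity is 1, a productivity-driven price cut is fully offset by higher
# volume: total spending, and hence the demand for labor, holds steady.
def quantity(price, elasticity, k=100.0):
    return k * price ** -elasticity

for e in (1.0, 0.5):
    q_before, q_after = quantity(10, e), quantity(5, e)   # price halves
    print(f"elasticity {e}: spending {10 * q_before:,.0f} -> {5 * q_after:,.0f}")
# elasticity 1.0: spending 100 -> 100 (demand absorbs the whole gain)
# elasticity 0.5: spending 316 -> 224 (demand lags, so less labor is needed)
```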

A corollary of the elasticity of consumption argument, not addressed by the authors, is that it relies on prices going through periods of deflation. Deflation occurred in the US in the 1930s and in Japan in the 1990s, and might have occurred in the 2008-2012 Great Recession were it not for the unconventional monetary policy of central banks around the world. Avoiding deflation has been celebrated for averting a potentially disastrous depression. But if prices are never allowed to fall, we lose one of the economic mechanisms for adjusting to technological change. I don’t have a clear-cut answer to this dilemma, but I would like to see a few more economists discussing this issue.

3.) Floor on wages
This argument is presented with a thought experiment: 
“Imagine that tomorrow a company introduced androids that could do absolutely everything a human worker could do, including building more androids. There’s an endless supply of these robots, and they’re extremely cheap to buy and virtually free to run over time.” 
In this hypothetical end-game scenario, the equilibrium wage for human labor falls to zero. Managed well, we’d be in a Utopia, managed poorly – dystopia.

This is no mere parlor game. The authors also point out that in a digital economy, in which the output of superstars can be freely reproduced, the equilibrium wage for non-superstars is already zero, or close to it. These non-superstars look for work in other sectors, pushing wages down there. Eventually, “If neither the worker nor any entrepreneur can think of a profitable task that requires the worker’s skills and capabilities, then that worker will go unemployed indefinitely.”

Something implicit and horrifying in this last mechanism is that the free market’s solution to an oversupply of workers is starvation. When demand for a typical material good shrinks, its price falls. The supply side of the market adjusts by producing less of it. But when demand for labor shrinks, its price (i.e. the wage rate) may fall, but this doesn’t translate to a lower supply of labor (i.e. a lower population), except through violent means.

The book concludes that governments need to take action in order to make the most of technology’s bounty while minimizing the spread, or at least mitigating some of its worst consequences. Amongst other things, the authors advise greater investment in education (“higher teacher salaries and more accountability”), more support for entrepreneurship, more investment in science, and more progressive taxation, especially raising taxes on those with superstar levels of income.

The authors join a growing minority of commentators who view a Basic Income – a guaranteed minimum income paid by the government – as the best solution to rising inequality.** In the long run this might be the only feasible way to organize an economy which only requires a small, skilled minority to generate most of its economic output.

Technological change bears the potential to benefit everyone, but it also has the potential to impoverish the majority while enriching a few. This is not some science-fiction future – it is starting to happen today. The warning bells are sounding, and some wise leadership will be needed to steer the ship through the coming storm.

______________________________

* Though not without spells of pain and suffering along the way.
** I find it quite noteworthy that this possibility is now part of mainstream discourse, and I found it even more surprising to learn that it almost became a reality under the Nixon administration.

Wednesday, 4 February 2015

How Might the Tech Bubble Burst?

Tech company valuations are through the roof right now, and many people have been questioning whether we are presently living through a tech bubble. In this blog post I set to one side the thorny question of whether we are in a bubble or not. Instead, I go through a thought experiment: I assume that we are in a bubble, and play out a few scenarios for ways in which it might burst. Here are my top three:

Scenario 1: A Rising Star Falls
Present tech valuations are only warranted if you believe in the fundamental quality of these companies’ management teams. A high level of future growth is already priced into current valuations, and it will only be attainable if the companies can expand their horizons, for example by expanding internationally. Such expansion puts a lot of pressure on organizational infrastructure. It is hard for a Silicon Valley-based headquarters to ensure that every local subsidiary maintains the quality standards it aspires to globally. Businesses that scale up slowly often have problems; those doing it at an accelerated pace (such as Uber) are even more likely to trip up. If too many local scandals mount, the managerial quality of the whole enterprise will get called into question, as will its valuation. The bold, fresh-faced management team will suddenly look hapless and inexperienced, flailing in the midst of a crisis, or riven by internal politics.

We only need look at Enron to see the risks that free-wheeling growth can expose an organization to. Now, I am categorically not saying that every tech start-up is an Enron waiting to happen. What I am saying is that it could take just one high-tech corporate implosion to cast doubt on all the others. And once investors start to doubt the fundamental managerial quality of these tech ventures, the game is up for the whole pack. Let me be totally clear: this line of argument is about perceptions. In the context of venture capital investments, ‘risk’ is highly subjective. One dramatic fallen star could change investors’ perceptions of the risk of all the other stars, leading the tech bubble to deflate.


Scenario 2: The Advertising Pyramid Collapses
Many tech ventures rely on advertising as their main (or sole) source of revenue. Advertising is the bedrock of the tech sector, worth an estimated $43bn in 2013. It has allowed the industry to evolve in such a way that consumers expect services to be free. Take it away and those apps and websites suddenly don’t look like such appealing investments anymore.

Where do the pressure points lie when it comes to advertising? I see two potential sources of strain. First is the question of the proportion of advertising accounted for by tech companies and websites themselves. Apps tend to display adverts for other apps; websites display adverts for other websites. Webmasters buy ads to drive eyeballs to their site, where they hope people will click on ads. When this kind of behavior occurs, the online sector is feeding on itself – it is autophagous. Here we are in classic bubble or pyramid territory. The pyramid is sustained as long as it draws more people in to play the game, but once it is revealed to be hollow, it vanishes at once. I don’t have data on how much advertising is of this nature – so I can’t truly judge the extent to which this is a problem. However, just from my personal experience of browsing the web it appears that a lot of advertising is of this autophagous nature.

The second pressure point is the difficulty of measuring the return on investment (ROI) of online advertising. Web-based advertising platforms throw off a lot of data. The funnel from views to clicks to purchases can be tracked, so in principle a marketing manager can attribute a given online sale to a given online advert. However, things are far from simple. A customer who clicked through a pay-per-click advert may have made exactly the same purchase even if the advert hadn’t been there. And a lot of online advertising is bought primarily to promote offline sales (think of, e.g., car adverts), whose effects are much harder to track. Interestingly, while online advertising opens up the possibility of using randomized experiments to measure the effect of adverts, research so far has found that the effect size is so small it is hard to measure reliably in a statistical sense.
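A back-of-envelope power calculation shows why (illustrative numbers of my own, using the standard two-sample formula at alpha = 0.05 and 80% power):

```python
from math import ceil

# Standard two-sample power calculation (alpha = 0.05, power = 0.80) with
# illustrative numbers: ads lift a 1.00% purchase rate to 1.02%, a 2%
# relative effect that would matter commercially.
z_alpha, z_beta = 1.96, 0.84
p0, p1 = 0.0100, 0.0102
variance = p0 * (1 - p0) + p1 * (1 - p1)
n_per_arm = ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p0) ** 2)
print(f"users needed per arm: {n_per_arm:,}")   # ~3.9 million
# Detecting a realistically small lift takes millions of users per arm,
# which is why credible measurement of advertising ROI remains rare.
```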

The upshot of this: the online advertising industry, while huge, is not yet in a long-term, stable equilibrium, and it’s not clear whether the stable market size will be larger or smaller than the market that exists now.


Scenario 3: Silicon Valley Disrupts Itself
To disrupt an industry is the bold aim of many of Silicon Valley’s start-ups. It typically entails finding a way to deliver the same service the industry presently delivers, but at a fraction of the cost or at a step-change improvement in quality or convenience.* It is often said that to disrupt you need to offer something 10x better than what presently exists, in order to overcome people’s inertia and lock-in with present systems.

The industries targeted for disruption are typically of the staid, old-school variety, perhaps dominated by some entrenched rentiers – think of Uber disrupting the taxi / car industry, or TransferWise disrupting the foreign remittance industry. The narrative of disruption underlies the massive valuations these companies receive. In Uber’s case, for example, debates about its valuation have moved from comparisons with the entire global market for taxi services to discussion of how it might act as a substitute for car ownership.

But there's no particular reason only old industries are vulnerable to disruption. It is perfectly conceivable that one of the current tech giants itself gets disrupted. People appear pretty locked-in to social media platforms such as Facebook, but if a company were to come along with an offer 10x better than one of these, it is easy to see customers switching. And, in fact, many already did: in the last few years plenty of people switched from Facebook to Twitter or Instagram as their primary social media feed, and in future something even better than either of these may come along. Well aware of this fact, Facebook paid a billion dollars for Instagram and $22bn for WhatsApp primarily because of the threat that these businesses posed to its dominance.

I can’t predict what a social media platform 10x better than Facebook would look like, but then neither could I have predicted Facebook’s creation before it existed. And, as with Scenario 1, it only takes one giant to fall for many of the others to lose their appeal to investors.
 

So, there we have it: three scenarios for how the tech bubble might burst. Which do you find the most plausible? What other scenarios sound realistic? Answers in the comments below!

_____________________

*Note: this is a fairly colloquial definition of disruption. It is a somewhat warped version of its original academic usage by Clay Christensen, which referred to the creation of new performance dimensions for emerging market sub-segments that eventually become large markets in their own right. Here, the colloquial meaning is the one I intend.

Friday, 16 January 2015

Notes from the cutting room floor: "Google has mastered technology, but they need to better understand people"

I wrote the following paragraph on 3rd November, 2013, shortly after a big rise in Google's share price. I never got round to expanding it into a full post. This week the Google Glass developer program was put on hiatus so it seemed like an apt point to dig it up. 

Google has made the news this week for its share price reaching record highs. I’ve been observing the company keenly for many years and greatly admire the technical prowess and creativity of its employees and the vision of its founders and leaders. In the past the company has pioneered a vast array of internet services and is one of the chief reasons why much of what is online is available for free (including this blog). Now it looks as though they plan to be a pioneer in the hardware world, and this is likely to be the greatest challenge they will ever face. There are numerous possible missteps, and above all I want to highlight the risk that, while they have mastered technology, they lack an understanding of how it is socially adopted and accepted. This could lead some of their greatest innovations – Google Glass and driverless cars – down a rocky path, with a real chance of outright rejection by society.