The Future of Everything

October 31, 2017

Why economists can’t predict the future

Filed under: Economics, Forecasting — David @ 12:11 pm

Cover article in Newsweek Japan on why economists can’t predict the future. Read an extract in Japanese here.


Original English version:


The quantum physicist Niels Bohr is credited with the saying that “prediction is hard, especially about the future.” Still, economists seem to have more trouble than most.

For example, mainstream economists uniformly failed to predict the global financial crisis that began in 2007. In fact, that was the case even during the crisis: a study by IMF economists showed the consensus of forecasters in 2008 was that not one of 77 countries considered would be in recession the next year (49 of them were).[1] That is like a weather forecaster saying the storm that is raging outside their window isn’t actually a storm.

In 2014, Haruhiko Kuroda, Governor of the Bank of Japan, predicted that inflation should “reach around the price stability target of 2 percent toward the end of fiscal 2014 through fiscal 2015.”[2] It apparently didn’t get the memo, preferring to remain well under one percent.[3] In Britain, economists confidently predicted that Brexit would cause an immediate economic disaster, which similarly failed to materialise.

This forecasting miss prompted the Bank of England’s Andrew Haldane to call for economics to become more like modern weather forecasting, which has a somewhat better track record at prognostication.[4] So can economists learn from weather forecasters – or is predicting the economy even harder than predicting the weather?

In many respects the comparison with meteorology seems apt, as the two fields have much in common. “Like weather forecasters,” said former Chairman of the US Federal Reserve Ben Bernanke in 2009, “economic forecasters must deal with a system that is extraordinarily complex … and about which our data and understanding will always be imperfect.”[5] The two fields also take a similar mechanistic approach to making predictions – with a few important differences.

Weather models work by dividing the atmosphere up into a 3D grid, and applying Newtonian laws of motion to track its flow. The mathematical models are complicated by things like the formation and dissipation of clouds, which are complex phenomena that can only be approximated by equations. The fact that clouds, and water vapour in general, are one of the most important features of the weather is the main reason weather prediction is so difficult (not the butterfly effect).[6]
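
The grid approach described above can be sketched in a few lines. This is a toy illustration only, not a real weather model: temperature on a one-dimensional ring of grid cells, pushed along by a constant wind using a simple upwind finite-difference step (all numbers invented).

```python
import numpy as np

def advect(temps, wind_speed, dx, dt):
    """One explicit upwind step of dT/dt = -u * dT/dx on a periodic 1-D grid.

    temps: temperature in each grid cell; dx: cell width; dt: time step.
    np.roll supplies the upwind neighbour, making the grid a closed ring.
    """
    return temps - wind_speed * dt / dx * (temps - np.roll(temps, 1))

temps = np.array([10.0, 12.0, 15.0, 13.0, 11.0])   # degrees C per cell
new_temps = advect(temps, wind_speed=5.0, dx=10.0, dt=1.0)
```

Real models do this on a 3-D global grid with many coupled variables, which is where clouds and water vapour make the equations so hard to close.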

Economic models similarly divide the economy into groups or sectors that are modelled with representative consumers and producers, whose homogeneous behaviour is simulated using economic “laws” such as supply and demand. However, unlike the weather which obviously moves around, these “laws” are assumed to drive prices to a stable equilibrium – despite the fact that the word “equilibrium” is hardly what comes to mind when discussing financial storms.
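
The equilibrium assumption can be made concrete with a toy example (illustrative numbers only, not from any real model): linear demand and supply curves, solved for the single price at which they cross.

```python
# Toy market clearing: demand Q_d = a - b*p falls with price,
# supply Q_s = c + d*p rises with it. Setting Q_d = Q_s gives the
# "equilibrium" price p* = (a - c) / (b + d) that models assume
# the invisible hand finds.
def equilibrium_price(a, b, c, d):
    return (a - c) / (b + d)

p_star = equilibrium_price(a=100, b=2, c=10, d=1)
```

The real question, of course, is whether prices in a crisis behave anything like this static crossing point.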

Furthermore, the economy is viewed as a giant barter system, so things like money and debt play no major role – but the global financial crisis was driven by exactly these things. One reason central banks couldn’t predict the 2007 banking crisis was because their model didn’t include banks. And when models do incorporate the effects of money, it is only in the form of “financial frictions” which as the name suggests are minor tweaks that do little to affect the results, and fail to properly reflect the entangled nature of the highly-connected global financial system, where a crisis in one area can propagate instantly across the world.

Predicting the economy using these tools is therefore rather like trying to predict the weather while leaving out water. This omission will seem bizarre to most non-economists, but it makes more sense when we take the subject’s history into account.

Adam Smith, who is usually considered the founding father of economics, assumed that the “invisible hand” of the markets would drive prices of goods or services to reflect their “real intrinsic value” so money was just a distraction.[7] As John Stuart Mill wrote in his 1848 Principles of Political Economy, “There cannot, in short, be intrinsically a more insignificant thing, in the economy of society, than money.”[8] According to Paul Samuelson’s “bible” textbook Economics, “if we strip exchange down to its barest essentials and peel off the obscuring layer of money, we find that trade between individuals and nations largely boils down to barter.”[9]

In the 1950s, economists showed – in what is sometimes called the “invisible hand theorem” – that such a barter economy would reach an optimal equilibrium, subject of course to numerous conditions. In the 1960s, efficient market theory argued that financial markets were instantaneously self-correcting equilibrium systems. The theory was used to develop methods for pricing options (contracts to buy or sell assets at a fixed price in the future) which led to an explosion in the use of these and other financial derivatives.
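
The option-pricing methods that grew out of efficient market theory can be sketched with the canonical example, the Black-Scholes formula for a European call (parameter values here are invented for illustration):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (both annualised).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

Note what the formula bakes in: a single known volatility and a frictionless, continuously self-correcting market – exactly the equilibrium assumptions at issue here.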

Today, economists use so-called macroeconomic models, which are the equivalent of weather models, to compute the global economic weather, while continuing to ignore or downplay money, debt, and financial derivatives. Given that the quantitative finance expert Paul Wilmott estimated the notional value of all the financial derivatives in 2010 at $1.2 quadrillion (so $1,200,000,000,000,000), this seems a bit of an oversight – especially since it was exactly these derivatives which were at the heart of the crisis (see our book The Money Formula).[10]

Now again, it may seem strange that economists think they can reliably model the whole economy while leaving out such a large amount of it – but it gets stranger. Because according to theory, not only is money not important, but much of it shouldn’t even exist.

Perhaps the most basic thing about money in a modern capitalist economy is that nearly all of it is produced by private banks, when they make loans. For example, when a bank gives you a mortgage, it doesn’t scrape the money together from deposits – it just makes up brand new funds, which get added to the money supply. But you wouldn’t know this from a training in mainstream economics, which treats the financial sector as little more than an intermediary; or until recently from central banks.
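
The double-entry logic of the point above can be shown in a minimal sketch (a hypothetical class, obviously not a real banking system): the loan and the deposit are created together, in one step.

```python
class Bank:
    """Stylised bank balance sheet: loans are assets, deposits are liabilities."""
    def __init__(self):
        self.loans = 0      # assets: claims on borrowers
        self.deposits = 0   # liabilities: money owed to customers

    def make_loan(self, amount):
        # The bank does not hand over pre-existing deposits. It books a
        # new loan (asset) and simultaneously credits the borrower with a
        # brand-new deposit (liability) - new money enters circulation.
        self.loans += amount
        self.deposits += amount

bank = Bank()
bank.make_loan(300_000)   # a mortgage creates 300,000 of new deposits
```

When the loan is repaid the entries are reversed and the money disappears again, which is why the money supply expands and contracts with lending.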

According to economist Richard Werner – who first came up with the idea of quantitative easing for Japan in the 1990s – “The topic of bank credit creation has been a virtual taboo for the thousands of researchers of the world’s central banks during the past half century.”[11] The first to break this taboo was the Bank of England, which created a considerable stir in the financial press in 2014 when it explained that most of the money in circulation – some 97% in the UK – is created by private banks in this way.[12] In 2017 the German Bundesbank agreed that “this refutes a popular misconception that banks act simply as intermediaries at the time of lending – ie that banks can only grant credit using funds placed with them previously as deposits by other customers.”[13]

This money creation process is highly dynamic, because it tends to ramp up during boom times and collapse during recessions, and works “instantaneously and discontinuously” as a Bank of England paper notes (their emphasis), which makes it difficult to incorporate in models.[14] The money thus created often goes into real estate or other speculative investments, so may not show up as inflation. And as Vítor Constâncio of the European Central Bank told his audience in a 2017 speech, its omission helped explain why economists failed to predict the crisis: “In the prevalent macro models, the financial sector was absent, considered to have a remote effect on the real economic activity … This ignored the fact that banks create money by extending credit ex nihilo within the limits of their capital ratio.”[15]

So to summarise, ten years after the crisis, central banks are finally admitting that the reason they didn’t predict it was because their models did not include how money is created or used. This is like a weather forecaster admitting a decade after the storm of the century that they couldn’t have predicted it, even in principle, because they had left out all the wet stuff.

Central bankers are also increasingly admitting that they have no satisfactory model of inflation – but that is obvious, because they have no satisfactory model of money.[16] Their policy of near-zero interest rates has created, not the expected inflation, but only asset bubbles and a destabilising global explosion in private sector debt.

How could we have reached this point? One reason, paradoxically, is that economists are all too familiar with the financial sector (which is happy to be kept out of the picture), not through their models but through consulting gigs and other perks, though they tend to be less than up-front about this. A 2012 study in the Cambridge Journal of Economics observed that, “economists almost never reveal their financial associations when they make public pronouncements on issues such as financial regulation.”[17] It also noted that “Perhaps these connections helped explain why few mainstream economists warned about the oncoming financial crisis.” This is like weather forecasters failing to include water or predict a storm because doing so would upset their sponsors.

Another reason, though, is that it is not possible to simply bolt a financial sector onto existing mainstream models, because as discussed above these are based on a mechanistic paradigm which – in part for ideological reasons – assumes that the actions of independent rational agents drive prices to a stable and optimal equilibrium.[18] Money however has remarkable properties which make it fundamentally incompatible with assumptions such as rationality, stability, efficiency, or indeed the entire mechanistic approach.

As we have seen, the creation or transfer of money is not a smooth or continuous process but takes place “instantaneously and discontinuously” which is as easy to model as a lightning strike. Money and debt act as entangling devices by linking debtors and creditors – and derivatives act as a kind of super-entanglement of the global financial system – which means that we cannot treat the system as made up of independent individuals.

Money is fundamentally dualistic in the sense that it combines the real properties of an owned object, with the virtual properties of number, which is why it can take the form of solid things such as coins, or of virtual money transfers as when you tap your card at a store. These dualistic properties, combining ownership and calculation, are what make it such a psychologically active substance. And prices in the economy are fundamentally indeterminate until measured (you don’t know exactly how much your house is worth until you sell it).[19]

To summarise, money is created and transmitted in discrete parcels, it entangles its users, it is dualistic, and prices are indeterminate. Haven’t we seen this before?

Niels Bohr’s speciality of quantum physics was initially inspired by the observation that at the quantum level matter and energy move not in a continuous fashion, but in discrete leaps and jumps. Pairs of quantum particles can become entangled, so they become part of a unified system, and a measurement on one instantaneously affects its entangled twin – an effect Einstein described as “spooky action at a distance.” Bohr’s “principle of complementarity” says that entities such as electrons behave sometimes like “real” particles, and sometimes like virtual waves. And Heisenberg’s uncertainty principle says that quantities such as location are fundamentally indeterminate.

Bohr’s contemporary, the English economist John Maynard Keynes, wrote in 1926, “We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied.”[20] He was speaking about the economy, but he was also inspired by the developments in physics – he met Einstein, and the title of his General Theory of Employment, Interest and Money was inspired by Einstein’s General Theory of Relativity.

Which leads one to think: if a century ago economics had decided to incorporate some insights from quantum physics instead of aping mechanistic weather models, the economy today might be rather better run.

Or if not, at least we would have a perfect excuse for forecast error: predicting the economy isn’t just harder than predicting the weather, it’s harder than quantum physics.

References

[1] Ahir, H., & Loungani, P. (2014, March). Can economists forecast recessions? Some evidence from the Great Recession. Retrieved from Oracle: forecasters.org/wp/wp-content/uploads/PLoungani_OracleMar2014.pdf.

[2] https://www.boj.or.jp/en/announcements/press/koen_2014/data/ko140320a1.pdf

[3] https://www.reuters.com/article/us-japan-economy-boj-kuroda/bojs-kuroda-still-far-to-go-to-reach-2-percent-inflation-target-idUSKBN18Z2VQ?il=0

[4] Inman, P. (2017, January 5). Chief economist of Bank of England admits errors in Brexit forecasting. The Guardian.

[5] Bernanke, B. (2009, May 22). Commencement address at the Boston College School of Law. Newton, Massachusetts.

[6] Orrell, D. (2007). Apollo’s Arrow: The Science of Prediction and the Future of Everything. Toronto: HarperCollins.

[7] Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. London: W. Strahan & T. Cadell.

[8] Mill, J. S. (1848). Principles of Political Economy. London: Parker.

[9] Samuelson, P. A. (1973). Economics (9th ed.). New York: McGraw-Hill, p. 55.

[10] Wilmott, P., & Orrell, D. (2017). The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets. Chichester: Wiley.

[11] Werner, R. A. (2016). A lost century in economics: Three theories of banking and the conclusive evidence. International Review of Financial Analysis, 46, 361-379.

[12] McLeay, M., Radia, A., & Thomas, R. (2014, March 14). Money Creation in the Modern Economy. Quarterly Bulletin 2014 Q1. Bank of England.

[13] Deutsche Bundesbank. (2017). How money is created. Retrieved from https://www.bundesbank.de/Redaktion/EN/Topics/2017/2017_04_25_how_money_is_created.html.

[14] Jakab, Z., & Kumhof, M. (2015). Banks are not intermediaries of loanable funds – and why this matters. Bank of England working papers(529), 1.

[15] Constâncio, V. (2017, May 11). Speech at the second ECB Macroprudential Policy and Research Conference, Frankfurt am Main. Retrieved from European Central Bank: https://www.ecb.europa.eu/press/key/date/2017/html/ecb.sp170511.en.html.

[16] Fleming, S. (2017, October 4). Fed has no reliable theory of inflation, says Tarullo. Financial Times; Giles, C. (2017, October 11). Central bankers face a crisis of confidence as models fail. Financial Times.

[17] Carrick-Hagenbarth, J., & Epstein, G. A. (2012). Dangerous interconnectedness: economists’ conflicts of interest, ideology and financial crisis. Cambridge Journal of Economics, 36(1), 43–63.

[18] Orrell, D. (2017). Economyths: 11 Ways That Economics Gets it Wrong. London: Icon Books.

[19] Orrell, D. (2016). A quantum theory of money and value. Economic Thought, 5(2), 19-36; Orrell, D., & Chlupatý, R. (2016). The Evolution of Money. New York: Columbia University Press.

[20] Keynes, 1926.


October 20, 2017

A Quantum Theory of Money and Value, Part 2: The Uncertainty Principle

Filed under: Economics, Forecasting — Tags: — David @ 4:53 pm

New paper in Economic Thought

Abstract: Economic forecasting is famously unreliable. While this problem has traditionally been blamed on theories such as the efficient market hypothesis or even the butterfly effect, an alternative explanation is the role of money – something which is typically downplayed or excluded altogether from economic models. Instead, models tend to treat the economy as a kind of barter system in which money’s only role is as an inert medium of exchange. Prices are assumed to almost perfectly reflect the ‘intrinsic value’ of an asset. This paper argues, however, that money is better seen as an inherently dualistic phenomenon, which merges precise number with the fuzzy concept of value. Prices are not the optimal result of a mechanical, Newtonian process, but are an emergent property of the money system. And just as quantum physics has its uncertainty principle, so the economy is an uncertain process which can only be approximated by mathematical models. Acknowledging the dynamic and paradoxical qualities of money changes our ontological framework for economic modelling, and for making decisions under uncertainty. Applications to areas of risk analysis, forecasting and modelling are discussed, and it is proposed that a greater appreciation of the fundamental causes of uncertainty will help to make the economy a less uncertain place.

Published in Economic Thought Vol 6, No 2, 2017. Read the full paper here.

February 7, 2017

Big data versus big theory

Filed under: Forecasting — Tags: — David @ 4:05 pm

The Winter 2017 edition of Foresight magazine includes my commentary on the article Changing the Paradigm for Business Forecasting by Michael Gilliland from SAS. Both are behind a paywall (though a longer version of Michael’s argument can be read on his SAS blog), but here is a brief summary.

According to Gilliland, business forecasting is currently dominated by an “offensive” paradigm, which is “characterized by a focus on models, methods, and organizational processes that seek to extract every last fraction of accuracy from our forecasts. More is thought to be better—more data, bigger computers, more complex models—and more elaborate collaborative processes.”

He argues that our “love affair with complexity” can lead to extra effort and cost, while actually reducing forecast accuracy. And while managers have often been seduced by the idea that “big data was going to solve all our forecasting problems”, research shows that even with complex models, forecast accuracy often fails to beat even a no-change forecasting model. His article therefore advocates a paradigm shift towards “defensive” forecasting, which focuses on simplifying the forecasting process, eliminating bad practices, and adding value.
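
The “no-change” benchmark mentioned above is easy to state precisely: the forecast for tomorrow is simply today’s value. A sketch with made-up numbers:

```python
def naive_forecast(series):
    """No-change (random-walk) forecast: predict each value as the previous one."""
    return series[:-1]   # the forecast for t+1 is the value observed at t

def mae(actual, forecast):
    """Mean absolute error between actual values and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(forecast)

sales = [100, 104, 99, 107, 103, 110]      # hypothetical weekly sales
forecasts = naive_forecast(sales)          # predicts sales[1:] from sales[:-1]
error = mae(sales[1:], forecasts)
# A complex model only "adds value" if its error beats this baseline.
```

The research Gilliland cites suggests that elaborate models surprisingly often fail this simple test.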

My comment on this (in about 1200 words) is … I agree. But I would argue that the problem is less big data, or even complexity, than big theory.

Our current modelling paradigm is fundamentally reductionist – the idea is to reduce a system to its parts, figure out the laws that govern their interactions, build a giant simulation of the whole thing, and solve. The resulting models are highly complex, and their flexibility makes them good at fitting past data, but they tend to be unstable (or stable in the wrong way) and are poor at making predictions.

If however we recognise that complex systems have emergent properties that resist a reductionist approach, it makes more sense to build models that only attempt to capture some aspect of the system behaviour, instead of reproducing the whole thing.

As an example, consider the question of predicting heart toxicity for new drug compounds, based on ion channel readings. One technique is to employ teams of researchers to build an incredibly complicated mechanistic model of the heart, consisting of hundreds of differential equations, with the ion channel readings as inputs. Or you can use a machine learning model. Or, most complicated, you can combine these in a multi-model approach. However my colleague Hitesh Mistry at Systems Forecasting found that a simple model, which just adds or subtracts the ion channel readings – the only parameters are +1 and -1 – performs just as well as the multi-model approach using three large-scale models plus a machine learning model (see Complexity v Simplicity, the winner is?).
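
A ±1 model of this kind is almost trivial to write down. The channel names and sign assignments below are invented for illustration – the real study’s data and choices differ:

```python
def simple_toxicity_score(readings, signs):
    """Sign-weighted sum: each ion channel reading enters with weight +1 or -1."""
    return sum(sign * readings[channel] for channel, sign in signs.items())

signs = {"hERG": +1, "CaV1.2": -1, "NaV1.5": -1}          # hypothetical signs
readings = {"hERG": 0.8, "CaV1.2": 0.3, "NaV1.5": 0.2}    # hypothetical data
score = simple_toxicity_score(readings, signs)
```

The entire “model” is the sign assignment – there is nothing to overfit, which is precisely the point.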

Now, to obtain the simple model, Mistry used some fairly sophisticated data analysis tools. But what counts is not the complexity of the methods, but the complexity of the final model. And in general, complexity-based models are often simpler than their reductionist counterparts.

I therefore strongly agree with Michael Gilliland that a “defensive” approach makes sense. But I think the paradigm shift he describes is part of, or related to, a move away from reductionist models, which we are realising don’t work very well for complex systems. With this new paradigm, models will be simpler, but they can also draw on a range of techniques that have been developed for the analysis of complex systems.
