The Future of Everything

October 31, 2017

Why economists can’t predict the future

Filed under: Economics, Forecasting — David @ 12:11 pm


Cover article in Newsweek Japan on why economists can’t predict the future. Read an extract in Japanese here.


Original English version:


The quantum physicist Niels Bohr is credited with the saying that “prediction is hard, especially about the future.” Still, economists seem to have more trouble than most.

For example, mainstream economists uniformly failed to predict the global financial crisis that began in 2007. In fact, that was the case even during the crisis: a study by IMF economists showed that the consensus of forecasters in 2008 was that not one of the 77 countries considered would be in recession the next year (49 of them were).[1] That is like a weather forecaster saying that the storm raging outside their window isn’t actually a storm.

In 2014, Haruhiko Kuroda, Governor of the Bank of Japan, predicted that inflation should “reach around the price stability target of 2 percent toward the end of fiscal 2014 through fiscal 2015.”[2] Inflation apparently didn’t get the memo, preferring to remain well under one percent.[3] In Britain, economists confidently predicted that Brexit would cause an immediate economic disaster, which similarly failed to materialise.

This forecasting miss prompted the Bank of England’s Andrew Haldane to call for economics to become more like modern weather forecasting, which has a somewhat better track record at prognostication.[4] So can economists learn from weather forecasters – or is predicting the economy even harder than predicting the weather?

In many respects the comparison with meteorology seems apt, as the two fields have much in common. “Like weather forecasters,” said former Chairman of the US Federal Reserve Ben Bernanke in 2009, “economic forecasters must deal with a system that is extraordinarily complex … and about which our data and understanding will always be imperfect.”[5] The two fields also take a similar mechanistic approach to making predictions – with a few important differences.

Weather models work by dividing the atmosphere up into a 3D grid and applying Newtonian laws of motion to track its flow. The mathematical models are complicated by things like the formation and dissipation of clouds, which are complex phenomena that can only be approximated by equations. The fact that clouds – and water vapour in general – are among the most important features of the weather is the main reason weather prediction is so difficult (not the butterfly effect).[6]

Economic models similarly divide the economy into groups or sectors, which are modelled with representative consumers and producers whose homogeneous behaviour is simulated using economic “laws” such as supply and demand. However, unlike the weather, which obviously moves around, these “laws” are assumed to drive prices to a stable equilibrium – even though “equilibrium” is hardly the word that comes to mind when discussing financial storms.
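As a concrete illustration – a deliberately toy sketch, not any actual macroeconomic model – here is what that equilibrium logic boils down to in code: choose linear supply and demand curves for a representative market, and solve for the single price at which they balance.

```python
# Toy sketch of the equilibrium logic described above (illustrative only).
# A representative consumer and producer are reduced to linear demand and
# supply curves, and the "law" of supply and demand is assumed to pick out
# the single price at which the two balance.

def demand(price, a=100.0, b=2.0):
    """Quantity demanded falls linearly with price (hypothetical parameters)."""
    return a - b * price

def supply(price, c=10.0, d=1.0):
    """Quantity supplied rises linearly with price (hypothetical parameters)."""
    return c + d * price

def equilibrium_price(lo=0.0, hi=100.0, tol=1e-9):
    """Bisect for the price where excess demand is zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) - supply(mid) > 0:
            lo = mid   # excess demand: price must rise
        else:
            hi = mid   # excess supply: price must fall
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    p = equilibrium_price()
    print(f"equilibrium price: {p:.2f}, quantity: {demand(p):.2f}")
```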

Furthermore, the economy is viewed as a giant barter system, so things like money and debt play no major role – yet the global financial crisis was driven by exactly these things. One reason central banks couldn’t predict the 2007 banking crisis was that their models didn’t include banks. And when models do incorporate the effects of money, it is only in the form of “financial frictions” which, as the name suggests, are minor tweaks that do little to affect the results, and which fail to properly reflect the entangled nature of the highly connected global financial system, where a crisis in one area can propagate instantly across the world.

Predicting the economy using these tools is therefore rather like trying to predict the weather while leaving out water. This omission will seem bizarre to most non-economists, but it makes more sense when we take the subject’s history into account.

Adam Smith, who is usually considered the founding father of economics, assumed that the “invisible hand” of the markets would drive prices of goods or services to reflect their “real intrinsic value” so money was just a distraction.[7] As John Stuart Mill wrote in his 1848 Principles of Political Economy, “There cannot, in short, be intrinsically a more insignificant thing, in the economy of society, than money.”[8] According to Paul Samuelson’s “bible” textbook Economics, “if we strip exchange down to its barest essentials and peel off the obscuring layer of money, we find that trade between individuals and nations largely boils down to barter.”[9]

In the 1950s, economists showed – in what is sometimes called the “invisible hand theorem” – that such a barter economy would reach an optimal equilibrium, subject of course to numerous conditions. In the 1960s, efficient market theory argued that financial markets were instantaneously self-correcting equilibrium systems. The theory was used to develop methods for pricing options (contracts to buy or sell assets at a fixed price in the future) which led to an explosion in the use of these and other financial derivatives.
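The best known of those option-pricing methods is the Black–Scholes formula for a European call option. A minimal sketch of the standard textbook version (the parameter values below are purely illustrative) shows how compact the machinery is – and how completely it rests on the assumption of a smooth, efficient, equilibrium market:

```python
# Minimal sketch of the Black-Scholes price for a European call option,
# the best-known of the option-pricing methods that grew out of efficient
# market theory. Parameter values below are purely illustrative.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """S: spot price, K: strike, T: years to expiry, r: risk-free rate,
    sigma: volatility. Assumes the usual frictionless, log-normal world."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

if __name__ == "__main__":
    # e.g. a one-year call struck at the money, 20% volatility, 1% rate
    print(f"call value: {black_scholes_call(100, 100, 1.0, 0.01, 0.2):.2f}")
```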

Today, economists use so-called macroeconomic models, which are the equivalent of weather models, to compute the global economic weather, while continuing to ignore or downplay money, debt, and financial derivatives. Given that the quantitative finance expert Paul Wilmott estimated the notional value of all financial derivatives in 2010 at $1.2 quadrillion (that is, $1,200,000,000,000,000), this seems a bit of an oversight – especially since it was exactly these derivatives that were at the heart of the crisis (see our book The Money Formula).[10]

Now again, it may seem strange that economists think they can reliably model the whole economy while leaving out such a large amount of it – but it gets stranger. Because according to theory, not only is money not important, but much of it shouldn’t even exist.

Perhaps the most basic thing about money in a modern capitalist economy is that nearly all of it is produced by private banks when they make loans. For example, when a bank gives you a mortgage, it doesn’t scrape the money together from deposits – it just creates brand new funds, which are added to the money supply. But you wouldn’t know this from a training in mainstream economics, which treats the financial sector as little more than an intermediary – nor, until recently, would you have known it from central banks.
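A stylised sketch of that double-entry step (no real bank is anywhere near this simple, but the accounting logic is the point): the loan and the deposit are created together, so the act of lending itself expands the money supply.

```python
# Stylised sketch of the loan-creates-deposit step described above.
# The loan (an asset for the bank) and the new deposit (a liability)
# are created together, expanding the broad money supply.

class ToyBank:
    def __init__(self):
        self.loans = 0.0      # bank assets
        self.deposits = 0.0   # bank liabilities = customers' money

    def grant_mortgage(self, amount):
        """The bank does not hand over existing savers' funds; it simply
        credits the borrower's account and records a matching loan."""
        self.loans += amount
        self.deposits += amount

bank = ToyBank()
bank.grant_mortgage(300_000)
print(f"loans: {bank.loans:,.0f}  deposits (new money): {bank.deposits:,.0f}")
```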

According to economist Richard Werner – who first came up with the idea of quantitative easing for Japan in the 1990s – “The topic of bank credit creation has been a virtual taboo for the thousands of researchers of the world’s central banks during the past half century.”[11] The first to break this taboo was the Bank of England, which created a considerable stir in the financial press in 2014 when it explained that most of the money in circulation – some 97% in the UK – is created by private banks in this way.[12] In 2017 the German Bundesbank agreed that “this refutes a popular misconception that banks act simply as intermediaries at the time of lending – ie that banks can only grant credit using funds placed with them previously as deposits by other customers.”[13]

This money creation process is highly dynamic, because it tends to ramp up during boom times and collapse during recessions, and works “instantaneously and discontinuously” as a Bank of England paper notes (their emphasis), which makes it difficult to incorporate in models.[14] The money thus created often goes into real estate or other speculative investments, so may not show up as inflation. And as Vítor Constâncio of the European Central Bank told his audience in a 2017 speech, its omission helped explain why economists failed to predict the crisis: “In the prevalent macro models, the financial sector was absent, considered to have a remote effect on the real economic activity … This ignored the fact that banks create money by extending credit ex nihilo within the limits of their capital ratio.”[15]

So to summarise, ten years after the crisis, central banks are finally admitting that the reason they didn’t predict it was because their models did not include how money is created or used. This is like a weather forecaster admitting a decade after the storm of the century that they couldn’t have predicted it, even in principle, because they had left out all the wet stuff.

Central bankers are also increasingly admitting that they have no satisfactory model of inflation – but that is obvious, because they have no satisfactory model of money.[16] Their policy of near-zero interest rates has created, not the expected inflation, but only asset bubbles and a destabilising global explosion in private sector debt.

How could we have reached this point? One reason, paradoxically, is that economists are all too familiar with the financial sector (which is happy to be kept out of the picture) – not through their models, but through consulting gigs and other perks, though they tend to be less than up-front about this. A 2012 study in the Cambridge Journal of Economics observed that “economists almost never reveal their financial associations when they make public pronouncements on issues such as financial regulation.”[17] It also noted that “Perhaps these connections helped explain why few mainstream economists warned about the oncoming financial crisis.” This is like weather forecasters failing to include water or predict a storm because doing so would upset their sponsors.

Another reason, though, is that it is not possible to simply bolt a financial sector onto existing mainstream models, because, as discussed above, these are based on a mechanistic paradigm which – in part for ideological reasons – assumes that the actions of independent rational agents drive prices to a stable and optimal equilibrium.[18] Money, however, has remarkable properties which make it fundamentally incompatible with assumptions such as rationality, stability, efficiency, or indeed the entire mechanistic approach.

As we have seen, the creation or transfer of money is not a smooth or continuous process but takes place “instantaneously and discontinuously”, which makes it about as easy to model as a lightning strike. Money and debt act as entangling devices by linking debtors and creditors – and derivatives act as a kind of super-entanglement of the global financial system – which means that we cannot treat the system as made up of independent individuals.

Money is fundamentally dualistic in the sense that it combines the real properties of an owned object with the virtual properties of number, which is why it can take the form of solid things such as coins, or of virtual money transfers, as when you tap your card at a store. These dualistic properties, combining ownership and calculation, are what make it such a psychologically active substance. And prices in the economy are fundamentally indeterminate until measured (you don’t know exactly how much your house is worth until you sell it).[19]

To summarise, money is created and transmitted in discrete parcels, it entangles its users, it is dualistic, and prices are indeterminate. Haven’t we seen this before?

Niels Bohr’s speciality of quantum physics was initially inspired by the observation that at the quantum level matter and energy move not in a continuous fashion, but in discrete leaps and jumps. Pairs of quantum particles can become entangled, so that they become part of a unified system, and a measurement on one instantaneously affects its entangled twin – an effect Einstein described as “spooky action at a distance.” Bohr’s “principle of complementarity” says that entities such as electrons behave sometimes like “real” particles, and sometimes like virtual waves. And Heisenberg’s uncertainty principle says that quantities such as location are fundamentally indeterminate.

Bohr’s contemporary, the English economist John Maynard Keynes, wrote in 1926: “We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied.”[20] He was speaking about the economy, but he was also inspired by developments in physics – he met Einstein, and the title of his General Theory of Employment, Interest and Money was inspired by Einstein’s General Theory of Relativity.

Which leads one to think: if a century ago economics had decided to incorporate some insights from quantum physics instead of aping mechanistic weather models, the economy today might be rather better run.

Or if not, at least we would have a perfect excuse for forecast error: predicting the economy isn’t just harder than predicting the weather, it’s harder than quantum physics.


[1] Ahir, H., & Loungani, P. (2014, March). Can economists forecast recessions? Some evidence from the Great Recession. Retrieved from Oracle:



[4] Inman, P. (2017, January 5). Chief economist of Bank of England admits errors in Brexit forecasting. The Guardian.

[5] Bernanke, B. (2009, May 22). Commencement address at the Boston College School of Law. Newton, Massachusetts.

[6] Orrell, D. (2007). Apollo’s Arrow: The Science of Prediction and the Future of Everything. Toronto: HarperCollins.

[7] Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. London: W. Strahan & T. Cadell.

[8] Mill, J. S. (1848). Principles of Political Economy. London: Parker.

[9] Samuelson, P. A. (1973). Economics (9th ed.). New York: McGraw-Hill, p. 55.

[10] Wilmott, P., & Orrell, D. (2017). The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets. Chichester: Wiley.

[11] Werner, R. A. (2016). A lost century in economics: Three theories of banking and the conclusive evidence. International Review of Financial Analysis, 46, 361-379.

[12] McLeay, M., Radia, A., & Thomas, R. (2014, March 14). Money Creation in the Modern Economy. Quarterly Bulletin 2014 Q1. Bank of England.

[13] Deutsche Bundesbank. (2017). How money is created. Retrieved from

[14] Jakab, Z., & Kumhof, M. (2015). Banks are not intermediaries of loanable funds – and why this matters. Bank of England Working Paper No. 529.

[15] Constâncio, V. (2017, May 11). Speech at the second ECB Macroprudential Policy and Research Conference, Frankfurt am Main. Retrieved from European Central Bank:

[16] Fleming, S. (2017, October 4). Fed has no reliable theory of inflation, says Tarullo. Financial Times. Giles, C. (2017, October 11). Central bankers face a crisis of confidence as models fail. Financial Times.

[17] Carrick-Hagenbarth, J., & Epstein, G. A. (2012). Dangerous interconnectedness: economists’ conflicts of interest, ideology and financial crisis. Cambridge Journal of Economics, 36(1), 43–63.

[18] Orrell, D. (2017). Economyths: 11 Ways That Economics Gets it Wrong. London: Icon Books.

[19] Orrell, D. (2016). A quantum theory of money and value. Economic Thought, 5(2), 19-36; Orrell, D., & Chlupatý, R. (2016). The Evolution of Money. New York: Columbia University Press.

[20] Keynes, 1926.




October 20, 2017

A Quantum Theory of Money and Value, Part 2: The Uncertainty Principle

Filed under: Economics, Forecasting — David @ 4:53 pm

New paper in Economic Thought

Abstract: Economic forecasting is famously unreliable. While this problem has traditionally been blamed on theories such as the efficient market hypothesis or even the butterfly effect, an alternative explanation is the role of money – something which is typically downplayed or excluded altogether from economic models. Instead, models tend to treat the economy as a kind of barter system in which money’s only role is as an inert medium of exchange. Prices are assumed to almost perfectly reflect the ‘intrinsic value’ of an asset. This paper argues, however, that money is better seen as an inherently dualistic phenomenon, which merges precise number with the fuzzy concept of value. Prices are not the optimal result of a mechanical, Newtonian process, but are an emergent property of the money system. And just as quantum physics has its uncertainty principle, so the economy is an uncertain process which can only be approximated by mathematical models. Acknowledging the dynamic and paradoxical qualities of money changes our ontological framework for economic modelling, and for making decisions under uncertainty. Applications to areas of risk analysis, forecasting and modelling are discussed, and it is proposed that a greater appreciation of the fundamental causes of uncertainty will help to make the economy a less uncertain place.

Published in Economic Thought Vol 6, No 2, 2017. Read the full paper here.

February 7, 2017

Big data versus big theory

Filed under: Forecasting — David @ 4:05 pm

The Winter 2017 edition of Foresight magazine includes my commentary on the article Changing the Paradigm for Business Forecasting by Michael Gilliland from SAS. A longer version of Michael’s argument can be read on his SAS blog, and my response is below.

Michael Gilliland argues convincingly that we need a paradigm shift in forecasting, away from an “offensive” approach that is characterized by a reliance on complicated models, and towards a more “defensive” approach which uses simple but robust models. As he points out, we have been too focussed on developing highly sophisticated models, as opposed to finding something that actually works in an efficient way.

Gilliland notes that part of this comes down to a fondness for complexity. While I agree completely with his conclusion that simple models are usually preferable to complicated ones, I would add that the problem is less an obsession with complexity per se than with building detailed mechanistic models of complexity. And the problem is less big data than big theory.

The archetype for the model-centric approach is the complex computer models of the atmosphere used in weather forecasting, which were pioneered around 1950 by the mathematician John von Neumann. These weather models divide the atmosphere (and sometimes the oceans) into a three-dimensional grid, and use equations based on principles of fluid flow to compute the flow of air and water. However, many key processes, such as the formation and dissipation of clouds, cannot be derived from first principles, so they need to be approximated. The result is highly complex models that are prone to model error (the “butterfly effect” is a secondary concern) but still do a reasonable job of predicting the weather a few days ahead. Their success inspired a similar approach in other areas such as economics and biology.
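For readers who want to see the basic idea, here is a toy version of the grid-based approach – a single quantity carried along a one-dimensional ring of cells by a constant wind – which is of course vastly simpler than a real atmospheric model with its 3D fluid dynamics and cloud parameterisations:

```python
# Toy illustration of the grid-based approach described above: a single
# quantity (say, temperature) carried along a one-dimensional ring of grid
# cells by a constant wind, using an upwind finite-difference step.
# Real atmospheric models solve full 3D fluid equations plus approximate
# "parameterisations" for clouds and other sub-grid processes.

import numpy as np

n_cells, dx, dt, wind = 100, 1.0, 0.5, 1.0   # illustrative values
temp = np.zeros(n_cells)
temp[40:60] = 1.0                            # a warm "blob" to be advected

def step(field):
    """One upwind finite-difference time step for du/dt + c*du/dx = 0."""
    return field - wind * dt / dx * (field - np.roll(field, 1))

for _ in range(50):
    temp = step(temp)

print("blob centre has moved to cell", int(np.argmax(temp)))
```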

The problem comes when these models are pushed to make forecasts beyond their zone of validity, as in climate forecasts. And here, simple models may actually do better. For example, a 2011 study by Fildes and Kourentzes showed that, for a limited set of historical data, a neural network model outperformed the conventional climate model approach; and that combining a Holt linear trend model with a conventional model improved forecast accuracy by 18 percent over a ten-year period.[1]
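For those unfamiliar with it, Holt’s linear trend method is just exponential smoothing with a trend term. A minimal sketch – the smoothing parameters and example series below are illustrative, not those used by Fildes and Kourentzes:

```python
# Minimal sketch of Holt's linear trend method, the kind of simple
# extrapolation model referred to above. Smoothing parameters and the
# example series are illustrative only.

def holt_forecast(series, horizon, alpha=0.5, beta=0.1):
    """Exponential smoothing with a linear trend term."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# e.g. a short, made-up temperature-anomaly series, forecast 3 steps ahead
history = [0.12, 0.18, 0.15, 0.22, 0.27, 0.25, 0.31]
print(holt_forecast(history, horizon=3))
```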

As the authors noted, while there have been many studies of climate models, “few, if any, studies have made a formal examination of their comparative forecasting accuracy records, which is at the heart of forecasting research.” This is consistent with the idea that complex models are favoured, not because they are necessarily better, but for institutional reasons.

Another point shown by this example, though, is that models associated with big data, complexity theory, etc., can actually be simpler than the models associated with the reductionist, mechanistic approach. So, for example, a neural network model might run happily on a laptop, while a full climate model needs a supercomputer. We therefore need to distinguish between model complexity and complexity science. A key lesson of complexity science is that many phenomena (e.g. clouds) are emergent properties which are not amenable to a reductionist approach, so simple models may be more appropriate.

Complexity science also changes the way we think about uncertainty. Under the mechanistic paradigm, uncertainty estimates can be determined by making random perturbations to parameters or initial conditions. In weather forecasting, for example, ensemble forecasting ups the complexity level by making multiple forecasts and analysing the spread. A similar approach is taken in economic forecasts. However, if the error is due to the model being incapable of capturing the complexity of the system, then there is no reason to think that perturbing the model’s inputs will tell you much about the real error (because the model structure itself is wrong). So again, it may be more appropriate to simply estimate error bounds based on past experience and update them as more information becomes available.
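A sketch of that alternative (the past forecast errors here are made-up numbers): rather than perturbing the model’s inputs, attach an interval to each new forecast using the quantiles of the model’s own historical errors, and update the interval as new errors come in.

```python
# Sketch of the alternative suggested above: instead of perturbing model
# inputs, estimate error bounds directly from the model's own track record.
# The past forecast errors below are made-up numbers for illustration.

import numpy as np

past_errors = np.array([-1.2, 0.4, 2.1, -0.8, 1.5, -2.3, 0.9, 1.1])

def empirical_interval(point_forecast, errors, coverage=0.8):
    """Attach an interval to a new forecast using quantiles of past errors."""
    lo, hi = np.quantile(errors, [(1 - coverage) / 2, (1 + coverage) / 2])
    return point_forecast + lo, point_forecast + hi

print(empirical_interval(point_forecast=5.0, errors=past_errors))
```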

Complexity versus simplicity

An example from a different area is the question of predicting heart toxicity for new drug compounds. Drug makers screen their compounds early in the development cycle by testing to see whether they interfere with several cellular ion channels. One way to predict heart toxicity from these test results is to employ teams of researchers to build an incredibly complicated mechanistic model of the heart, consisting of hundreds of differential equations, and use the ion channel readings as inputs. Or you can use a machine learning model. Or, most complicated, you can combine these in a multi-model approach. However, my colleague Hitesh Mistry at Systems Forecasting found that a simple model, which just adds or subtracts the ion channel readings – the only parameters are +1 and -1 – performs just as well as the multi-model approach using three large-scale models plus a machine learning model (see Complexity v Simplicity, the winner is?).
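A sketch of what such a signed-sum model might look like – the channel names, signs and threshold below are hypothetical placeholders, not Mistry’s actual model:

```python
# Sketch of a signed-sum model of the kind described above. The channel
# names, signs and threshold are hypothetical placeholders, not the actual
# model from the Systems Forecasting post.

def toxicity_score(block_hERG, block_CaV, block_NaV):
    """Add or subtract normalised ion-channel block readings; the only
    'parameters' are the +1/-1 signs."""
    return (+1) * block_hERG + (-1) * block_CaV + (-1) * block_NaV

def flag_compound(readings, threshold=0.5):   # threshold is illustrative
    return toxicity_score(**readings) > threshold

print(flag_compound({"block_hERG": 0.8, "block_CaV": 0.1, "block_NaV": 0.05}))
```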

Now, to obtain the simple model, Mistry used some fairly sophisticated data analysis tools. But what counts is not the complexity of the methods, but the complexity of the final model. And in general, complexity-based models are often simpler than their reductionist counterparts. Clustering algorithms employ some fancy mathematics, but the end result is clusters, which isn’t a very complicated concept. Even agent-based models, which simulate a system using individual software agents that interact with one another, can involve a relatively small number of parameters if designed carefully.
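As an illustration of how sparse an agent-based model can be – an entirely hypothetical toy, not a model of any real market – here a few hundred traders flip between buying and selling moods and the price drifts with the net sentiment, three parameters in total:

```python
# Minimal sketch of an agent-based model with deliberately few parameters.
# Entirely illustrative; real agent-based market models differ, but careful
# designs can stay nearly this sparse.

import random

N_AGENTS, SWITCH_PROB, IMPACT = 200, 0.1, 0.001   # the three parameters
moods = [random.choice([-1, 1]) for _ in range(N_AGENTS)]
price = 100.0

for day in range(5):
    # each trader may flip mood; the price drifts with net sentiment
    moods = [-m if random.random() < SWITCH_PROB else m for m in moods]
    price *= 1 + IMPACT * sum(moods) / N_AGENTS
    print(f"day {day + 1}: price {price:.2f}")
```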

People who work with big data, meanwhile, are keenly aware of the problem of overfitting – more so, it would appear, than the designers of reductionist models, which often have hundreds of parameters. Perhaps the ultimate example of such models is the dynamic stochastic general equilibrium (DSGE) models used in macroeconomics. Studies show that these models have effectively no predictive value (which is why they are not used by, for example, hedge funds), and one reason is that key parameters cannot be determined from data, so they have to be made up (see The Trouble With Macroeconomics by Paul Romer, chief economist at the World Bank).

One reason we have tended to prefer mechanistic-looking models is that they tell a rational cause-and-effect story. When making a forecast, it is common to ask whether a certain effect has been taken into account and, if not, to add it to the model. Business forecasting models may not be as explicitly reductionist as their counterparts in weather forecasting, biology, or economics, but they are still often inspired by the need to tell a consistent story. A disadvantage of models that come out of the complexity approach is that they often appear to be black boxes. For example, the equations in a neural network model of the climate system might not tell you much about how the climate works, and sometimes that is what people are really looking for.

When it comes to prediction, as opposed to description, I therefore again agree with Michael Gilliland that a ‘defensive’ approach makes more sense. But I think the paradigm shift he describes is part of, or related to, a move away from reductionist models, which we are realising don’t work very well for complex systems. With this new paradigm, models will be simpler, but they can also draw on a range of techniques that have developed for the analysis of complex systems.

[1] Fildes, R., and N. Kourentzes. “Validation and forecasting accuracy in models of climate change.” International Journal of Forecasting 27 (2011): 968–995.

