The Future of Everything

April 24, 2018

Storm warnings

Filed under: Economics, Forecasting — Tags: — David @ 2:17 pm

“The test of science is its ability to predict”
— Richard Feynman

As if governed by some deterministic law, the current debates between economists and critics follow a predictable path. Critics begin by mentioning the failure of economists to predict or warn of the crisis. Howard Reed for example recently wrote in Prospect that ‘When the great crash hit a decade ago, the public realised that the economics profession was clueless.’

Thus provoked, economists then reply by pointing out that macroeconomic forecasting is only a small part of what economists do, that their models are based on mathematics and logical consistency, and that it is the critics who don’t know what they are talking about. A case in point is the riposte by Diane Coyle, which describes Reed’s piece as ‘lamentable’, a ‘caricature’ and an ‘ill-informed diatribe’ that furthermore ignores existing guidelines on what criticism is ‘good’ and ‘bad’ (the former includes ‘The criticism is by an economist’, which doesn’t seem in the multidisciplinary spirit, and would rule out this piece since I am an applied mathematician).

In a similar debate last year in Times Higher Education, Steve Keen wrote that the global financial crisis caught ‘leading economists and policy bodies completely by surprise. A decade later, economics is a divided and lost discipline.’ Christopher Auld responded that ‘Criticism of economics that relegates the field to … failed “weather” forecasting is not just misguided, it is anti-intellectual and dangerous.’

Over in the Guardian, Larry Elliott wrote that ‘Neoclassical economics has become an unquestioned belief system and treats those challenging the creed as dangerous’. A group of economists from the Institute for Fiscal Studies (IFS) appeared to confirm the latter when they called the article ‘dangerous’ and ‘ill-informed expert bashing … Like most economists, we do not try to forecast the date of the next financial crisis, or any other such event. We are not astrologers, nor priests to the market gods. We analyse data.’

The usual excuse offered for failing to predict the crisis, as Robert Lucas put it in 2009, is that ‘simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring.’ That is like a weather forecaster saying their forecast was explicitly conditional on there being no storms. (The claim here reflects the efficient market-related idea that changes are caused by random external shocks.) Another excuse is that no one else predicted it either – though a number of heterodox economists and others would beg to differ.

But for both sides, the debate soon veers from discussing the crisis, to the topic of politics, with critics saying that economic models have been exploited by the right, while mainstream economists claim that in fact they are terribly progressive.

I would argue though that the debate isn’t really about politics, and even less about what most economists do with their time (the oft-advertised fact that economists apply their models to many things other than macroeconomic forecasting isn’t necessarily reassuring to a critic). Instead it is one of scientific legitimacy, and it is as old as the word ‘forecast’.

Forecast

The argument between orthodox economists and their critics resembles one that occurred in weather forecasting in the mid-nineteenth century, with the establishment of the UK Meteorological Office in 1854 by Admiral Robert FitzRoy.

As captain of the Beagle, FitzRoy had taken Charles Darwin on the trip around the world that sparked Darwin’s theory of evolution. Seeing the potential that weather forecasting had to save lives by warning mariners of storms, he set up a network of 40 weather stations around the UK, with weather reports published in London newspapers.

However FitzRoy’s efforts were not well received, either by the public or the scientific establishment. At the time, weather prediction was something practised by astrologers, and the popular press enjoyed themselves comparing the Met. Office’s inaccurate predictions with those from sources such as ‘Zadkiel’s Almanac’. The mainstream scientists saw this prediction contest as a threat to their reputation, just as economists do today.

FitzRoy tried to blunt the comparison with astrology by avoiding use of loaded words such as ‘prediction’, and instead invented a new word of his own: forecast. ‘Prophecies or predictions they are not; the term forecast is strictly applicable to such an opinion as is the result of a scientific combination and calculation.’ In 1863, he published The Weather Book, which tried to make the weather comprehensible to people of average education. However the attempt to popularise the subject only further annoyed elitist scientific institutions like the Royal Society.

Two years later, FitzRoy took his own life at the age of 59. He may have been affected by his association with Darwin’s theory of evolution, which, as a creationist, he considered blasphemous. However it appears that the primary cause of his depression was being caught between the so-called astro-meteorologists on the one side and the scientific establishment on the other.

After his death, a committee chaired by Darwin’s cousin, Sir Francis Galton, released a report which claimed that his forecasts were ‘wanting in all elements necessary to inspire confidence’ and storm warnings were suspended. However FitzRoy was not without supporters; fishermen, maritime insurers, and the Navy had actually found the warnings useful, and they were reinstated in 1867. And today of course we rarely head out on a trip without checking the forecast.

Deniers

Now, one might think that astrology and creationism would have little to do with a debate over the scientific legitimacy of modern economic forecasting, but it seems that not much has changed. Indeed, in recent years – when most people have found economic forecasts ‘wanting in all elements necessary to inspire confidence’ – economists have been as anxious to draw distinctions between what they see as science and non-science as FitzRoy once was.

Diane Coyle for example writes that Reed’s piece conforms to a list of ‘bad’ criticisms helpfully compiled by Auld, which begins: ‘Every mainstream science which touches on political or religious ideology attracts more than its fair share of deniers: the anti-vaccine crowd v mainstream medicine, GMO fearmongers v geneticists, creationists v biologists, global warming deniers v climatologists. Economics is no different.’ Economist Pieter Gautier similarly explained the existence of heterodox approaches by saying that ‘You also see this happening in the other sciences; in biology you have intelligent design, in climate science you have the climate skeptics.’ Pontus Rendahl told the Financial Times in 2016 that calling for more pluralism in economics is ‘the same argument as the creationists in the US who say that natural selection is just a theory,’ while Michael Ben-Gad said that student groups want to be ‘liberated from neoclassical economics’ dogmatic insistence on internal logic, mathematical rigour and quantification … Still, we do not teach astrology or creationism in our universities, though some students might enjoy them more than physics.’ According to Simon Wren-Lewis, the problem with pluralism is ‘obvious once you make the comparison to medicine. Don’t like the idea of vaccination? Pick an expert from the anti-vaccination medical school.’ Or as the IFS economists put it, ‘We are not astrologers’.

Just as Robert FitzRoy had to defend the Met. Office against astrologers (though not creationists, since he was one), so economists are in a battle for what counts as science – which as Feynman noted has always been associated with the ability to predict. The difference is that, while FitzRoy was distancing himself from actual astrologers, mainstream economists tar a broad range of critics as ‘ill-informed’ and ‘dangerous’ deniers, regardless of their background or experience, in what looks like a kind of Hegelian ‘othering’.

Reality check

However, the issue is not ‘political or religious ideology’, it’s science. Predictions are obviously an important part of the scientific method, and when predictions do go badly wrong, it should serve as a reality check. While providing storm warnings is far from being the only job of economists (and expectations are much lower than in something like meteorology), it is surely one of the most important, which is why central banks for example devote considerable resources to it. And reflexively stamping critics and heterodox economists as anti-scientific deniers doesn’t seem very scientific – especially when some of them actually seem to be better at the prediction stuff.

Of course, as I point out in my forthcoming book Quantum Economics, economics should not be compared directly with weather forecasting. For one thing, the fact that economists’ predictions and models affect the economy (the financial crisis of 2008 for example was in part caused by faulty economic risk models) means that their responsibility is more like that of engineers or doctors. Instead of predicting exactly when the system will crash (no one has ever asked for a precise ‘date’), they should warn of risks, incorporate design features to help avoid failure, know how to address problems when they occur, and be alert for conflicts of interest and ethical violations. The profession’s failings in these areas, rather than any particular forecast, are the real reason so many are calling for a genuinely new paradigm in economics, as opposed to a rehashed version of the old one.

In the meantime, perhaps economists should just follow FitzRoy’s lead and invent a new word for their predictions. Econo-prognostications?

This piece first appeared at openDemocracy on April 24, 2018.


October 31, 2017

Why economists can’t predict the future

Filed under: Economics, Forecasting — David @ 12:11 pm


Cover article in Newsweek Japan on why economists can’t predict the future. Read an extract in Japanese here.

 

Original English version:

 

The quantum physicist Niels Bohr is credited with the saying that “prediction is hard, especially about the future.” Still, economists seem to have more trouble than most.

For example, mainstream economists uniformly failed to predict the global financial crisis that began in 2007. In fact, that was the case even during the crisis: a study by IMF economists showed the consensus of forecasters in 2008 was that not one of 77 countries considered would be in recession the next year (49 of them were).[1] That is like a weather forecaster saying the storm that is raging outside their window isn’t actually a storm.

In 2014, Haruhiko Kuroda, Governor of the Bank of Japan, predicted that inflation should “reach around the price stability target of 2 percent toward the end of fiscal 2014 through fiscal 2015.”[2] It apparently didn’t get the memo, preferring to remain well under one percent.[3] In Britain, economists confidently predicted that Brexit would cause an immediate economic disaster, which similarly failed to materialise.

This forecasting miss prompted the Bank of England’s Andrew Haldane to call for economics to become more like modern weather forecasting, which has a somewhat better track record at prognostication.[4] So can economists learn from weather forecasters – or is predicting the economy even harder than predicting the weather?

In many respects the comparison with meteorology seems apt, as the two fields have much in common. “Like weather forecasters,” said former Chairman of the US Federal Reserve Ben Bernanke in 2009, “economic forecasters must deal with a system that is extraordinarily complex … and about which our data and understanding will always be imperfect.”[5] The two fields also take a similar mechanistic approach to making predictions – with a few important differences.

Weather models work by dividing the atmosphere up into a 3D grid, and applying Newtonian laws of motion to track its flow. The mathematical models are complicated by things like the formation and dissipation of clouds, which are complex phenomena that can only be approximated by equations. The fact that clouds, and water vapour in general, are among the most important features of the weather is the main reason weather prediction is so difficult (not the butterfly effect).[6]
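
To make the grid idea concrete, here is a minimal sketch (in Python, with made-up numbers) of the kind of calculation involved: a single quantity carried along a one-dimensional row of grid cells by a constant wind. Real models do this in three dimensions for many coupled variables, with approximate extra terms for processes such as clouds.

```python
import numpy as np

# Toy grid calculation (illustration only, not a real weather model): a warm
# "blob" of temperature is carried along a 1D row of grid cells by a constant
# wind, using a simple upwind finite-difference step.

n_cells, dx, dt, wind = 100, 1.0, 0.1, 2.0               # grid spacing, time step, wind speed
temp = np.exp(-0.01 * (np.arange(n_cells) - 30.0) ** 2)  # initial warm blob around cell 30

for _ in range(200):                                      # march the state forward in time
    temp[1:] -= wind * dt / dx * (temp[1:] - temp[:-1])   # upwind advection step

print("peak temperature is now near cell", temp.argmax())  # the blob has drifted downwind
```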

Economic models similarly divide the economy into groups or sectors that are modelled with representative consumers and producers, whose homogeneous behaviour is simulated using economic “laws” such as supply and demand. However, unlike the weather, which obviously moves around, these “laws” are assumed to drive prices to a stable equilibrium – despite the fact that the word “equilibrium” is hardly what comes to mind when discussing financial storms.
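
For comparison, a minimal sketch of that equilibrium logic, with a single hypothetical good and made-up linear demand and supply curves (real macro models do this for many sectors at once):

```python
# Illustrative only: the "law" of supply and demand is assumed to drive the
# price to the point where the two quantities match.

def demand(price):
    return 100.0 - 2.0 * price     # consumers buy less as the price rises

def supply(price):
    return 10.0 + 1.0 * price      # producers supply more as the price rises

# Equilibrium: demand(p) = supply(p)  =>  100 - 2p = 10 + p  =>  p = 30
p = (100.0 - 10.0) / (2.0 + 1.0)
print(p, demand(p), supply(p))      # price 30.0, quantity 40.0 on both sides
```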

Furthermore, the economy is viewed as a giant barter system, so things like money and debt play no major role – but the global financial crisis was driven by exactly these things. One reason central banks couldn’t predict the 2007 banking crisis was that their models didn’t include banks. And when models do incorporate the effects of money, it is only in the form of “financial frictions” which, as the name suggests, are minor tweaks that do little to affect the results, and fail to properly reflect the entangled nature of the highly-connected global financial system, where a crisis in one area can propagate instantly across the world.

Predicting the economy using these tools is therefore rather like trying to predict the weather while leaving out water. This omission will seem bizarre to most non-economists, but it makes more sense when we take the subject’s history into account.

Adam Smith, who is usually considered the founding father of economics, assumed that the “invisible hand” of the markets would drive prices of goods or services to reflect their “real intrinsic value” so money was just a distraction.[7] As John Stuart Mill wrote in his 1848 Principles of Political Economy, “There cannot, in short, be intrinsically a more insignificant thing, in the economy of society, than money.”[8] According to Paul Samuelson’s “bible” textbook Economics, “if we strip exchange down to its barest essentials and peel off the obscuring layer of money, we find that trade between individuals and nations largely boils down to barter.”[9]

In the 1950s, economists showed – in what is sometimes called the “invisible hand theorem” – that such a barter economy would reach an optimal equilibrium, subject of course to numerous conditions. In the 1960s, efficient market theory argued that financial markets were instantaneously self-correcting equilibrium systems. The theory was used to develop methods for pricing options (contracts to buy or sell assets at a fixed price in the future) which led to an explosion in the use of these and other financial derivatives.

Today, economists use so-called macroeconomic models, which are the equivalent of weather models, to compute the global economic weather, while continuing to ignore or downplay money, debt, and financial derivatives. Given that the quantitative finance expert Paul Wilmott estimated the notional value of all the financial derivatives in 2010 at $1.2 quadrillion (so $1,200,000,000,000,000), this seems a bit of an oversight – especially since it was exactly these derivatives which were at the heart of the crisis (see our book The Money Formula).[10]

Now again, it may seem strange that economists think they can reliably model the whole economy while leaving out such a large amount of it – but it gets stranger. Because according to theory, not only is money not important, but much of it shouldn’t even exist.

Perhaps the most basic thing about money in a modern capitalist economy is that nearly all of it is produced by private banks, when they make loans. For example, when a bank gives you a mortgage, it doesn’t scrape the money together from deposits – it just makes up brand new funds, which get added to the money supply. But you wouldn’t know this from a training in mainstream economics, which treats the financial sector as little more than an intermediary; or until recently from central banks.
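
As a stylised illustration (the figures are invented, and real banks face capital and regulatory constraints), the double-entry bookkeeping behind that mortgage looks something like this:

```python
# Stylised sketch of money creation through lending (hypothetical numbers).
# Granting a 100,000 mortgage books a new loan (asset) and simultaneously
# credits the borrower with a new deposit (liability) - no existing deposits
# are moved, so the money supply grows by the size of the loan.

bank = {"assets": {"reserves": 10_000, "loans": 0},
        "liabilities": {"deposits": 0}}

def grant_loan(bank, amount):
    bank["assets"]["loans"] += amount          # new asset: the borrower's IOU
    bank["liabilities"]["deposits"] += amount  # new money: a matching deposit
    return bank

grant_loan(bank, 100_000)
print(bank)   # reserves unchanged; deposits (money supply) up by 100,000
```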

According to economist Richard Werner – who first came up with the idea of quantitative easing for Japan in the 1990s – “The topic of bank credit creation has been a virtual taboo for the thousands of researchers of the world’s central banks during the past half century.”[11] The first to break this taboo was the Bank of England, which created a considerable stir in the financial press in 2014 when it explained that most of the money in circulation – some 97% in the UK – is created by private banks in this way.[12] In 2017 the German Bundesbank agreed that “this refutes a popular misconception that banks act simply as intermediaries at the time of lending – ie that banks can only grant credit using funds placed with them previously as deposits by other customers.”[13]

This money creation process is highly dynamic, because it tends to ramp up during boom times and collapse during recessions, and works “instantaneously and discontinuously” as a Bank of England paper notes (their emphasis), which makes it difficult to incorporate in models.[14] The money thus created often goes into real estate or other speculative investments, so may not show up as inflation. And as Vítor Constâncio of the European Central Bank told his audience in a 2017 speech, its omission helped explain why economists failed to predict the crisis: “In the prevalent macro models, the financial sector was absent, considered to have a remote effect on the real economic activity … This ignored the fact that banks create money by extending credit ex nihilo within the limits of their capital ratio.”[15]

So to summarise, ten years after the crisis, central banks are finally admitting that the reason they didn’t predict it was because their models did not include how money is created or used. This is like a weather forecaster admitting a decade after the storm of the century that they couldn’t have predicted it, even in principle, because they had left out all the wet stuff.

Central bankers are also increasingly admitting that they have no satisfactory model of inflation – but that is obvious, because they have no satisfactory model of money.[16] Their policy of near-zero interest rates has created, not the expected inflation, but only asset bubbles and a destabilising global explosion in private sector debt.

How could we have reached this point? One reason, paradoxically, is that economists are all too familiar with the financial sector (which is happy to be kept out of the picture) – not through their models, but through consulting gigs and other perks, about which they tend to be less than up-front. A 2012 study in the Cambridge Journal of Economics observed that, “economists almost never reveal their financial associations when they make public pronouncements on issues such as financial regulation.”[17] It also noted that “Perhaps these connections helped explain why few mainstream economists warned about the oncoming financial crisis.” This is like weather forecasters failing to include water or predict a storm because doing so would upset their sponsors.

Another reason, though, is that it is not possible to simply bolt a financial sector onto existing mainstream models, because as discussed above these are based on a mechanistic paradigm which – in part for ideological reasons – assumes that the actions of independent rational agents drive prices to a stable and optimal equilibrium.[18] Money however has remarkable properties which make it fundamentally incompatible with assumptions such as rationality, stability, efficiency, or indeed the entire mechanistic approach.

As we have seen, the creation or transfer of money is not a smooth or continuous process but takes place “instantaneously and discontinuously” which is as easy to model as a lightning strike. Money and debt act as entangling devices by linking debtors and creditors – and derivatives act as a kind of super-entanglement of the global financial system – which means that we cannot treat the system as made up of independent individuals.

Money is fundamentally dualistic in the sense that it combines the real properties of an owned object, with the virtual properties of number, which is why it can take the form of solid things such as coins, or of virtual money transfers as when you tap your card at a store. These dualistic properties, combining ownership and calculation, are what make it such a psychologically active substance. And prices in the economy are fundamentally indeterminate until measured (you don’t know exactly how much your house is worth until you sell it).[19]

To summarise, money is created and transmitted in discrete parcels, it entangles its users, it is dualistic, and prices are indeterminate. Haven’t we seen this before?

Niels Bohr’s speciality of quantum physics was initially inspired by the observation that at the quantum level matter and energy move not in a continuous fashion, but in discrete leaps and jumps. Pairs of quantum particles can become entangled, so they become part of a unified system, and a measurement on one instantaneously affects its entangled twin – an effect Einstein described as “spooky action at a distance.” Bohr’s “principle of complementarity” says that entities such as electrons behave sometimes like “real” particles, and sometimes like virtual waves. And Heisenberg’s uncertainty principle says that quantities such as location are fundamentally indeterminate.

Bohr’s contemporary, the English economist John Maynard Keynes, wrote in 1926, “We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fails us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied.”[20] He was speaking about the economy, but he was inspired also by the developments in physics – he met Einstein, and the title of his General Theory of Employment, Interest and Money was inspired by Einstein’s General Theory of Relativity.

Which leads one to think: if a century ago economics had decided to incorporate some insights from quantum physics instead of aping mechanistic weather models, the economy today might be rather better run.

Or if not, at least we would have a perfect excuse for forecast error: predicting the economy isn’t just harder than predicting the weather, it’s harder than quantum physics.

References

[1] Ahir, H., & Loungani, P. (2014, March). Can economists forecast recessions? Some evidence from the Great Recession. Retrieved from Oracle: forecasters.org/wp/wp-content/uploads/PLoungani_OracleMar2014.pdf.

[2] https://www.boj.or.jp/en/announcements/press/koen_2014/data/ko140320a1.pdf

[3] https://www.reuters.com/article/us-japan-economy-boj-kuroda/bojs-kuroda-still-far-to-go-to-reach-2-percent-inflation-target-idUSKBN18Z2VQ?il=0

[4] Inman, P. (2017, January 5). Chief economist of Bank of England admits errors in Brexit forecasting. The Guardian.

[5] Bernanke, B. (2009, May 22). Commencement address at the Boston College School of Law. Newton, Massachusetts.

[6] Orrell, D. (2007). Apollo’s Arrow: The Science of Prediction and the Future of Everything. Toronto: HarperCollins.

[7] Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. London: W. Strahan & T. Cadell.

[8] Mill, J. S. (1848). Principles of Political Economy. London: Parker.

[9] Samuelson, P. A. (1973). Economics (9th ed.). New York: McGraw-Hill, p. 55.

[10] Wilmott, P., & Orrell, D. (2017). The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took Over the Markets. Chichester: Wiley.

[11] Werner, R. A. (2016). A lost century in economics: Three theories of banking and the conclusive evidence. International Review of Financial Analysis, 46, 361-379.

[12] McLeay, M., Radia, A., & Thomas, R. (2014, March 14). Money Creation in the Modern Economy. Quarterly Bulletin 2014 Q1. Bank of England.

[13] Deutsche Bundesbank. (2017). How money is created. Retrieved from https://www.bundesbank.de/Redaktion/EN/Topics/2017/2017_04_25_how_money_is_created.html.

[14] Jakab, Z., & Kumhof, M. (2015). Banks are not intermediaries of loanable funds – and why this matters. Bank of England working papers(529), 1.

[15] Constâncio, V. (2017, May 11). Speech at the second ECB Macroprudential Policy and Research Conference, Frankfurt am Main. Retrieved from European Central Bank: https://www.ecb.europa.eu/press/key/date/2017/html/ecb.sp170511.en.html.

[16] Fleming, S. (2017, October 4). Fed has no reliable theory of inflation, says Tarullo. Financial Times. Giles, C. (2017, October 11). Central bankers face a crisis of confidence as models fail . Financial Times.

[17] Carrick-Hagenbarth, J., & Epstein, G. A. (2012). Dangerous interconnectedness: economists’ conflicts of interest, ideology and financial crisis. Cambridge Journal of Economics, 36(1), 43–63.

[18] Orrell, D. (2017). Economyths: 11 Ways That Economics Gets it Wrong. London: Icon Books.

[19] Orrell, D. (2016). A quantum theory of money and value. Economic Thought, 5(2), 19-36; Orrell, D., & Chlupatý, R. (2016). The Evolution of Money. New York: Columbia University Press.

[20] Keynes, 1926.

 

 

October 20, 2017

A Quantum Theory of Money and Value, Part 2: The Uncertainty Principle

Filed under: Economics, Forecasting — Tags: — David @ 4:53 pm

New paper in Economic Thought

Abstract: Economic forecasting is famously unreliable. While this problem has traditionally been blamed on theories such as the efficient market hypothesis or even the butterfly effect, an alternative explanation is the role of money – something which is typically downplayed or excluded altogether from economic models. Instead, models tend to treat the economy as a kind of barter system in which money’s only role is as an inert medium of exchange. Prices are assumed to almost perfectly reflect the ‘intrinsic value’ of an asset. This paper argues, however, that money is better seen as an inherently dualistic phenomenon, which merges precise number with the fuzzy concept of value. Prices are not the optimal result of a mechanical, Newtonian process, but are an emergent property of the money system. And just as quantum physics has its uncertainty principle, so the economy is an uncertain process which can only be approximated by mathematical models. Acknowledging the dynamic and paradoxical qualities of money changes our ontological framework for economic modelling, and for making decisions under uncertainty. Applications to areas of risk analysis, forecasting and modelling are discussed, and it is proposed that a greater appreciation of the fundamental causes of uncertainty will help to make the economy a less uncertain place.

Published in Economic Thought Vol 6, No 2, 2017. Read the full paper here.

February 7, 2017

Big data versus big theory

Filed under: Forecasting — Tags: — David @ 4:05 pm

The Winter 2017 edition of Foresight magazine includes my commentary on the article Changing the Paradigm for Business Forecasting by Michael Gilliland from SAS. A longer version of Michael’s argument can be read on his SAS blog, and my response is below.

Michael Gilliland argues convincingly that we need a paradigm shift in forecasting, away from an “offensive” approach that is characterized by a reliance on complicated models, and towards a more “defensive” approach which uses simple but robust models. As he points out, we have been too focussed on developing highly sophisticated models, as opposed to finding something that actually works in an efficient way.

Gilliland notes that part of this comes down to a fondness for complexity. While I agree completely with his conclusion that simple models are usually preferable to complicated models, I would add that the problem is less an obsession with complexity per se, than with building detailed mechanistic models of complexity. And the problem is less big data, than big theory.

The archetype for the model-centric approach is the complex computer models of the atmosphere used in weather forecasting, which were pioneered around 1950 by the mathematician John von Neumann. These weather models divide the atmosphere (and sometimes the oceans) into a three-dimensional grid, and use equations based on principles of fluid flow to compute the flow of air and water. However many key processes, such as the formation and dissipation of clouds, cannot be derived from first principles, so need to be approximated. The result is highly complex models that are prone to model error (the “butterfly effect” is a secondary concern) but still do a reasonable job of predicting the weather a few days ahead. Their success inspired a similar approach in other areas such as economics and biology.

The problem comes when these models are pushed to make forecasts beyond their zone of validity, as in climate forecasts. And here, simple models may actually do better. For example, a 2011 study by Fildes and Kourentzes showed that, for a limited set of historical data, a neural network model out-performed the conventional climate model approach; and a combination of a Holt linear trend model with a conventional model led to an improvement of 18 percent in forecast accuracy over a ten-year period.[1]
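
As a rough sketch of what such a simple benchmark looks like in practice – not the Fildes and Kourentzes setup itself, and using an invented temperature series rather than their data – a Holt linear trend model can be fitted and extrapolated in a few lines:

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Minimal Holt linear-trend forecast (illustration only; the "temperature"
# series below is made up, not the observational record used in the study).

years = np.arange(1960, 2011)
temps = 14.0 + 0.015 * (years - 1960) + np.random.default_rng(1).normal(0, 0.1, len(years))

fit = Holt(temps, initialization_method="estimated").fit()  # level + linear trend
print(fit.forecast(10))   # simple ten-year-ahead extrapolation
```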

As the authors noted, while there have been many studies of climate models, “few, if any, studies have made a formal examination of their comparative forecasting accuracy records, which is at the heart of forecasting research.” This is consistent with the idea that complex models are favored, not because they are necessarily better, but for institutional reasons.

Another point shown by this example, though, is that models associated with big data, complexity theory, etc., can actually be simpler than the models associated with the reductionist, mechanistic approach. So for example a neural network model might run happily on a laptop, while a full climate model needs a supercomputer. We therefore need to distinguish between model complexity, and complexity science. A key lesson of complexity science is that many phenomena (e.g. clouds) are emergent properties which are not amenable to a reductionist approach, so simple models may be more appropriate.

Complexity science also changes the way we think about uncertainty. Under the mechanistic paradigm, uncertainty estimates can be determined by making random perturbations to parameters or initial conditions. In weather forecasting, for example, ensemble forecasting ups the complexity level by making multiple forecasts and analysing the spread. A similar approach is taken in economic forecasts. However if error is due to the model being incapable of capturing the complexity of the system, then there is no reason to think that perturbing model inputs will tell you much about the real error (because the model structure is wrong). So again, it may be more appropriate to simply estimate error bounds based on past experience and update them as more information becomes available.
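
A minimal sketch of that empirical alternative, using invented past forecasts and outcomes:

```python
import numpy as np

# Instead of perturbing a model's inputs, estimate error bounds directly from
# its past forecast errors, and update them as new forecasts are verified.
# All numbers here are made up for illustration.

past_forecasts = np.array([2.1, 1.8, 2.5, 3.0, 2.2, 1.9])
past_outcomes  = np.array([1.7, 2.4, 2.6, 1.9, 2.1, 2.8])
errors = past_outcomes - past_forecasts

lo, hi = np.quantile(errors, [0.1, 0.9])   # empirical 80% error band
new_forecast = 2.4
print(f"forecast {new_forecast:.1f}, likely range "
      f"{new_forecast + lo:.1f} to {new_forecast + hi:.1f}")
```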

Complexity versus simplicity

An example from a different area is the question of predicting heart toxicity for new drug compounds. Drug makers screen their compounds early in the development cycle by testing to see whether they interfere with several cellular ion channels. One way to predict heart toxicity based on these test results is to employ teams of researchers to build an incredibly complicated mechanistic model of the heart, consisting of hundreds of differential equations, and use the ion channel readings as inputs. Or you can use a machine learning model. Or, most complicated, you can combine these in a multi-model approach. However my colleague Hitesh Mistry at Systems Forecasting found that a simple model, which just adds or subtracts the ion channel readings – the only parameters are +1 and -1 – performs just as well as the multi-model approach using three large-scale models plus a machine learning model (see Complexity v Simplicity, the winner is?).
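
A rough sketch of what such a fixed-weight model looks like – the channel names and readings below are hypothetical, and this is not Mistry’s actual code:

```python
# Sketch of the "add or subtract the ion channel readings" idea, with the
# weights fixed at +1 or -1. Compound data are invented for illustration.

# fraction of each channel blocked by a compound at a reference concentration
compounds = {
    "drug_A": {"hERG": 0.60, "CaV1.2": 0.10, "NaV1.5": 0.05},
    "drug_B": {"hERG": 0.20, "CaV1.2": 0.30, "NaV1.5": 0.25},
}

def toxicity_score(blocks):
    # blocking the hERG potassium channel raises the risk score; blocking the
    # calcium and sodium channels offsets it, hence the -1 weights
    return +1 * blocks["hERG"] - 1 * blocks["CaV1.2"] - 1 * blocks["NaV1.5"]

for name, blocks in compounds.items():
    print(name, round(toxicity_score(blocks), 2))
```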

Now, to obtain the simple model, Mistry used some fairly sophisticated data analysis tools. But what counts is not the complexity of the methods, but the complexity of the final model. And in general, complexity-based models are often simpler than their reductionist counterparts. Clustering algorithms employ some fancy mathematics, but the end result is clusters, which isn’t a very complicated concept. Even agent-based models, which simulate a system using individual software agents that interact with one another, can involve a relatively small number of parameters if designed carefully.
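
To illustrate the point about clustering, with random made-up data:

```python
import numpy as np
from sklearn.cluster import KMeans

# The fitting machinery is fairly sophisticated, but the output is just two
# cluster centres and a label per point. Data are random, for illustration.

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

model = KMeans(n_clusters=2, n_init=10).fit(points)
print(model.cluster_centers_)   # the whole "model": two points in the plane
```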

People who work with big data, meanwhile, are keenly aware of the problem of overfitting – more so, it would appear, than the designers of reductionist models, which often have hundreds of parameters. Perhaps the ultimate example of such models is the dynamic stochastic general equilibrium (DSGE) models used in macroeconomics. Studies show that these models have effectively no predictive value (which is why they are not used by e.g. hedge funds), and one reason is that key parameters cannot be determined from data so have to be made up (see The Trouble With Macroeconomics by Paul Romer, chief economist at the World Bank).

One reason we have tended to prefer mechanistic-looking models is that they tell a rational cause-and-effect story. When making a forecast it is common to ask whether a certain effect has been taken into account, and if not, to add it to the model. Business forecasting models may not be as explicitly reductionist as their counterparts in weather forecasting, biology, or economics, but they are still often inspired by the need to tell a consistent story. A disadvantage of models that come out of the complexity approach is that they often appear to be black boxes. For example the equations in a neural network model of the climate system might not tell you much about how the climate works, and sometimes that is what people are really looking for.

When it comes to prediction, as opposed to description, I therefore again agree with Michael Gilliland that a ‘defensive’ approach makes more sense. But I think the paradigm shift he describes is part of, or related to, a move away from reductionist models, which we are realising don’t work very well for complex systems. With this new paradigm, models will be simpler, but they can also draw on a range of techniques that have developed for the analysis of complex systems.

[1] Fildes, R., and N. Kourentzes. “Validation and forecasting accuracy in models of climate change.” International Journal of Forecasting 27 (2011): 968–995.

 
