Two things led to this blog today. First, the IMF has once again been lecturing the world on economic policy. In the Global Financial Stability Report and the World Economic Outlook Update – both released yesterday (July 16, 2012) – the IMF has downgraded its growth forecasts again yet is hanging on to the myth that austerity is the path to resolution and that the deficit reductions underway are appropriately growth supporting. Doesn’t anyone in the IMF understand logic? One cannot on the one hand admit that growth is falling below previous forecasts yet on the other hand claim that the policy which caused growth to slump is growth supporting. Second, Anna Schwartz died in New York on June 21, 2012. The two events can be linked.
The EUObserver article (July 16, 2012) – IMF tells eurozone to turn on printing presses – is representative of the press reaction to the latest IMF reports, which show how compromised the IMF has become. I might consider the latest IMF World Economic Outlook tomorrow in more detail. But the motivation today is the continued belief that monetary policy matters.
They quote the IMF as saying in the WEO:
There is room for monetary policy in the euro area to ease further. In addition, the ECB should ensure that its monetary support is transmitted effectively across the region and should continue to provide ample liquidity support to banks under sufficiently lenient conditions … The utmost priority is to resolve the crisis in the euro area …
At the outset of the crisis in early 2008, the major policy reaction was for central banks to drop interest rates and then engage in quantitative easing and other so-called non-standard monetary policy initiatives (swap arrangements etc.).
Please read my blog – Quantitative easing 101 – for more discussion on this point.
The fiscal policy responses came a bit later and clearly were reluctant innovations.
This prioritising of monetary policy stems from the Monetarist era (1970s and onward) when the profession abandoned the dominant Keynesian macroeconomic paradigm in favour of Monetarism.
In my recent book with Joan Muysken – Full Employment Abandoned – we cover this paradigm shift in some detail. Specifically, we show that Milton Friedman’s work unambiguously aimed to build on the early research of Irving Fisher and was up against a new macroeconomic orthodoxy in the 1950s – Keynesian thinking.
By the 1920s, Irving Fisher was laying the groundwork for what became Monetarism some 42 years later. The work of Fisher was obscured by the rise of the Keynesian macroeconomic orthodoxy.
Friedman and others were working on the foundations of a resurgence of Neoclassical macroeconomics based on the Quantity Theory of Money during the 1950s and 1960s. The Monetarist reinterpretation of the trade-off between unemployment and inflation, which emphasised the role of expectations, revived the Classical (pre-Keynesian) notion of a natural unemployment rate (defined as equivalent to full employment). The devastating consequence was the rejection of a role for demand management policies to limit unemployment to its frictional component.
They recast the Phillips curve (the relationship between inflation and unemployment) to be a relationship where mistakes in price expectations drove real shocks (via supply shifts) rather than the way the Keynesians constructed the relationship – real imbalances (excess labour supply – that is, unemployment) driving the inflation process (via demand shocks).
The two approaches are not the slightest bit similar and constitute two separate paradigms (or philosophical enquiries) although one cannot cast it as a Kuhnian shift (see below).
The importance of this shift in macroeconomic thinking towards Monetarism after the OPEC oil shocks was that it scorned aggregate demand intervention to maintain low unemployment. Any unemployment rate was deemed optimal and a reflection of voluntary, utility-maximising choices. The policy emphasis shifted from full employment to full employability and the period of active labour market programs began in earnest.
The rise in acceptance of Monetarism and its New Classical counterpart was not based on an empirical rejection of the Keynesian orthodoxy, but in Alan Blinder’s words:
… was instead a triumph of a priori theorising over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism. It was not, in a word, a Kuhnian scientific revolution.
The stagflation (coincidence of inflation and unemployment) in the 1970s (the so-called shift in the Phillips curve) associated with the OPEC ructions led to a view that the OECD economies were failing and provided a strong empirical endorsement for the Natural Rate Hypothesis, despite the fact that the instability came from the supply side.
Any Keynesian remedies proposed to reduce unemployment were met with derision from the bulk of the profession who had embraced the new theory and its policy implications. The natural rate hypothesis now became the basis for defining full employment, which then evolved to the concept of the NAIRU.
Please read my blog – The dreaded NAIRU is still about! – for more discussion on this point.
The inflation was interpreted by Monetarists to be a vindication of the classical Quantity Theory of Money (QTM), which draws a relationship between the growth in the money supply and the rate of inflation.
The QTM is written in symbols as MV = PQ. This means that the money stock (M) times the velocity of money – the turnover of the money stock per period (V) – is equal to the price level (P) times real output (Q). The mainstream assume that V is fixed (despite the fact that, empirically, it moves all over the place) and claim that Q is always at its full employment level as a result of free market adjustments.
So this theory denies the existence of unemployment. The more reasonable mainstream economists admit that short-run deviations from the predictions of the Quantity Theory of Money can occur but claim that in the long-run all the frictions causing unemployment will disappear and the theory will apply.
So by claiming that V and Q are fixed, it becomes obvious that changes in M cause changes in P – which is the basic Monetarist claim that expanding the money supply is inflationary. They say that excess monetary growth creates a situation where too much money is chasing too few goods and the only adjustment that is possible is nominal (that is, inflation).
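A minimal numerical sketch of that Monetarist logic (my own illustration, not an endorsement – the function name and the values for V and Q are made up for the example): once V and Q are held fixed, any growth in M maps one-for-one into P.

```python
# A sketch of the Monetarist reading of MV = PQ:
# hold V and Q fixed, so growth in M must show up entirely in P.

def implied_price_level(M, V=1.5, Q=1000.0):
    """Solve MV = PQ for P, holding V and Q fixed (the Monetarist assumption)."""
    return M * V / Q

for M in (1000.0, 1100.0, 1210.0):          # money stock grows 10 per cent each step
    P = implied_price_level(M)
    print(f"M = {M:8.1f}  ->  P = {P:.3f}")  # P also grows 10 per cent each step
```

The rest of this post is, in effect, about why the two assumptions baked into that sketch (fixed V and fixed Q) do not hold.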
Given that the central bank was deemed responsible for the growth of the money supply the conclusion was simple.
Governments (central banks) were to blame for inflation because they were too busy “printing money” to try to keep unemployment low (lower than the supposed but mythical NAIRU), and so they should adopt a monetary targeting rule to provide certainty.
Please read my blog – Central bank independence – another faux agenda – for more discussion on this point.
One of the contributions of Keynes was to show that the Quantity Theory of Money could not be correct. He observed price level changes that were independent of movements in the money supply (and vice versa), which changed his own perception of the way the monetary system operated.
Further, with high rates of capacity and labour underutilisation at various times (including now) one can hardly seriously maintain the view that Q is fixed. There is always scope for real adjustments (that is, increasing output) to match nominal growth in aggregate demand. So if increased credit became available and borrowers used the deposits that were created by the loans to purchase goods and services, it is likely that firms with excess capacity will respond by increasing real output to maintain market share.
Moreover, as I explain in this blog – Money multiplier and other myths and these blogs – Building bank reserves will not expand credit and Building bank reserves is not inflationary – the central bank cannot control the money supply anyway.
Which brings me to the death of Anna Schwartz. The New York Times ran an obituary on the day of her death (June 21, 2012) – Anna Schwartz, Economist Who Collaborated With Friedman, Dies at 96.
It said that she was “a research economist who wrote monumental works on American financial history in collaboration with the Nobel laureate Milton Friedman while remaining largely in his shadow …”
It said that she was referred to as the:
“high priestess of monetarism,” upholding a school of thought that maintains that the size and turnover of the money supply largely determines the pace of inflation and economic activity.
So the QTM.
The NYT says that:
The Friedman-Schwartz collaboration “A Monetary History of the United States, 1867-1960,” a book of nearly 900 pages published in 1963, is considered a classic. Ben S. Bernanke, the Federal Reserve chairman, called it “the leading and most persuasive explanation of the worst economic disaster in American history.”
The authors concluded that policy failures by the Fed, which largely controls the money supply, were one of the root causes of the Depression.
So you can see that current monetary policy is probably still being influenced by this work, and a substantial component of the hyperinflation scaremongering certainly draws on it.
I have taught and used in my research advanced time series econometric techniques for more than 25 years now and was part of the David Hendry revolution as a graduate student in the early 1980s. This approach – known as the general-to-specific approach – demonstrated that much of the time series work prior to the late 1970s was invalid and should be disregarded.
The roots of this new approach to econometric modelling emerged out of discontent in the 1970s with the state of the art.
Scepticism was growing in the 1970s that the traditional – specific-to-general – techniques were delivering useless results. For example, LSE econometrician Meghnad Desai wrote in his 1976 Applied Econometrics book (page vii) that:
Even within the academic profession, one is sensing a doubt as to whether the generation of more numbers for their own sake is fruitful. The ad hoc approach of many practising econometricians to the problem of hypothesis testing and inference is illustrated by the popular image of much econometrics as a high R2 in search of theory. Garbage in-garbage out is how many describe their own activity.
For the non-specialist the R2 was a summary statistical measure of how well the estimated model “fitted” the actual data. With the advent of modern computing, researchers could run millions of econometric models – data mining – and come up with almost anything that they wanted.
There was little willingness to engage in transparent model selection techniques or replication. Some referred to it as an exercise in eCONometrics. The task was to maximise the R2 even if the model was deeply flawed.
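To see how a “high R2 in search of theory” can be manufactured, here is a small simulation of my own (entirely an illustration – the sample size and number of junk regressors are arbitrary): regress pure noise on a growing pile of equally meaningless regressors and watch the in-sample R2 climb.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                      # a typical small macro sample
y = rng.normal(size=n)      # "dependent variable": pure noise

def r_squared(y, X):
    """In-sample R^2 from an OLS fit of y on X (with a constant)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

for k in (1, 5, 10, 20, 30):
    X = rng.normal(size=(n, k))            # k regressors that mean nothing
    print(f"{k:2d} junk regressors: R^2 = {r_squared(y, X):.2f}")
```

The “fit” improves mechanically as regressors are added, even though nothing in the model has any economic content.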
The problems escalated and reached a peak in the late 1970s when the inflationary outburst associated with the OPEC oil price hikes caused major forecasting errors in the macroeconometric models that had been developed by central banks, treasuries and other bodies (some private consulting firms etc).
It seemed that these models, which were extremely expensive to develop (some with thousands of equations), were useless and could be outperformed by single-equation univariate processes. That is, a model based only on past values of the variable in question. No theory – just inertia.
The impact of the evolving inflation was significant because it exposed fatal specification flaws in the extant models. The fact that they were mis-specified (including omitting key variables) was not discovered until one such missing but important variable – the inflation rate – started to exhibit a non-zero variance.
That is, a model can forecast well even though it is mis-specified as long as, say, the source of the mis-specification is not doing anything. So in the case of an omitted variable, this will not be an issue until that particular variable starts to move around. Then the flaws are exposed. That happened to the major consumption functions, for example, in the late 1970s.
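That point can be illustrated with a toy simulation (my own construction, not drawn from any of the actual 1970s models – all the numbers are invented): a consumption-style equation that omits inflation forecasts fine while inflation sits still, then falls apart once inflation starts to move.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 80
income = 100 + np.cumsum(rng.normal(0.5, 1.0, T))             # trending "income"
inflation = np.full(T, 2.0)                                    # flat in the first half...
inflation[T // 2:] += np.cumsum(rng.normal(0.8, 0.5, T // 2))  # ...then it starts moving
consumption = 5 + 0.8 * income - 1.5 * inflation + rng.normal(0, 1, T)

# Mis-specified model: consumption regressed on income only (inflation omitted),
# estimated on the first half of the sample where inflation is constant.
X = np.column_stack([np.ones(T // 2), income[:T // 2]])
beta, *_ = np.linalg.lstsq(X, consumption[:T // 2], rcond=None)

fitted = beta[0] + beta[1] * income
err_first = consumption[:T // 2] - fitted[:T // 2]
err_second = consumption[T // 2:] - fitted[T // 2:]
print("RMSE while inflation is flat :", np.sqrt(np.mean(err_first ** 2)).round(2))
print("RMSE once inflation moves    :", np.sqrt(np.mean(err_second ** 2)).round(2))
```

While inflation is constant its effect is simply absorbed into the intercept, so the equation looks fine; the moment the omitted variable acquires a non-zero variance the forecast errors blow up.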
The general-to-specific approach says that we don’t really know what the exact specification is and we need to search for it – including dynamic effects (time lags). So we start with a very general model of the process we are interested in and then perform a sequence of rigorous statistical tests on the model’s performance, which allows us to simplify the specification (eliminate variables or lags of variables that are of no consequence) until we achieve a parsimonious representation of the data.
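Here is a stylised sketch of that testing-down sequence (my own toy version, using plain OLS t-tests rather than the full battery of diagnostics Hendry advocates, and assuming the pandas and statsmodels libraries are available): start with a general lag specification and repeatedly drop the least significant term until everything remaining is significant.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 200
x = rng.normal(size=T)
# True process: y depends only on current x and its own first lag.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 1.0 * x[t] + rng.normal()

# General model: constant, current x, and two lags each of y and x.
df = pd.DataFrame({"y": y, "x": x})
for lag in (1, 2):
    df[f"y_lag{lag}"] = df["y"].shift(lag)
    df[f"x_lag{lag}"] = df["x"].shift(lag)
df = df.dropna()

regressors = ["x", "y_lag1", "y_lag2", "x_lag1", "x_lag2"]
while True:
    X = sm.add_constant(df[regressors])
    fit = sm.OLS(df["y"], X).fit()
    pvals = fit.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] < 0.05:      # everything left is significant: stop
        break
    regressors.remove(worst)      # testing down: drop the least significant term

print("Retained regressors:", regressors)
```

In this toy setting the procedure typically recovers the true specification (current x and the first lag of y); in practice the testing also covers residual diagnostics, encompassing and parameter constancy, not just t-ratios.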
The resulting model is false by definition because all models are false. We can never know the truth. How would we know it if we found it? The purpose of econometric modelling, then, is not to discover truth but to generate tentatively appropriate representations of the data that outperform all existing representations.
The specific-to-general approach that characterised the traditional work – including that used by Friedman and Schwartz – started with a very narrow specification, usually a static relationship between a few variables, and then added variables in no systematic way as the modelling exercise continued, with the aim of maximising the R2 statistic.
As Hendry et al. showed this was a very unsatisfactory way of proceeding.
In the past, when I have taught advanced econometrics to fourth-year students I have engaged in an exercise where we try to replicate some famous (influential) econometric study from first principles. The data were readily available and we usually found that the results of the study could not be replicated – typically because the model selection methodology was not transparent.
That meant that we could not work out how the researcher achieved the final form of the published equation because their reporting standards were lax.
More importantly, when the modelling was performed using the general-to-specific techniques on the same data, typically a very different model would emerge which would not support the theoretical proposition being countenanced (pushed) by the researcher.
Econometrics in the bad-old days was used to push a particular proposition from economic theory. So the orthodox economists would dodgy up some econometric model and conclude that their theory was right.
Friedman and Schwartz did just that.
In this 2004 paper from Econometric Theory – The ET Interview: Professor David F. Hendry – econometrician Neil Ericsson, Hendry’s former research officer and now a senior economist in the US Federal Reserve system, discusses all manner of topics with David Hendry, including the Friedman-Schwartz scandal.
Neil Ericsson said:
In 1982, Milton Friedman and Anna Schwartz published their book Monetary Trends in the United States and the United Kingdom, and it had many potential policy implications. Early the following year, the Bank asked you to evaluate the econometrics in Friedman and Schwartz (1982) for the Bank’s panel of academic consultants …
This led to the 1983 Ericsson and Hendry paper – a version of which you can access as – An Econometric Analysis of UK Money Demand in Monetary Trends in the United States and the United Kingdom by Milton Friedman and Anna J. Schwartz. It took eight years before a version of that paper was finally published in a major journal (more on that below).
Hendry replied to Ericsson by saying that:
Friedman and Schwartz’s approach was deliberately simple-to-general, commencing with bivariate regressions, generalizing to trivariate regressions, etc. By the early 1980s, most British econometricians had realized that such an approach was not a good modeling strategy. However, replicating their results revealed numerous other problems as well.
To which Ericsson noted “I recall that one of those was simply graphing velocity.”
In the Interview they reproduced a graph that shows how devious Friedman and Schwartz were in trying to push the Monetarist line. I reproduce it as the next graph.
Why is this important?
Think QTM! The theory of inflation relied on three things. First, that the central bank can control the money supply. Second, that there was continual full employment so the only way the economy can respond to nominal demand growth (MV) is via price rises and if MV accelerates there will be inflation. Third, that the velocity of circulation was stable.
Hendry told Ericsson that:
The graph in Friedman and Schwartz … made UK velocity look constant over their century of data. I initially questioned your plot of UK velocity–using Friedman and Schwartz’s own annual data–because your graph showed considerable nonconstancy in velocity. We discovered that the discrepancy between the two graphs arose mainly because Friedman and Schwartz plotted velocity allowing for a range of 1 to 10, whereas UK velocity itself only varied between 1 and 2.4.
The abuse and mis-use of scales on graphs.
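The scaling trick is easy to reproduce (a hypothetical illustration with made-up velocity numbers wandering within the 1 to 2.4 range Hendry reports for the actual UK data; matplotlib is assumed to be available): plot the same series once on an axis running from 1 to 10 and once on an axis that fits the data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
years = np.arange(1870, 1971)
# Made-up "velocity" series kept between roughly 1 and 2.4.
velocity = np.clip(1.7 + np.cumsum(rng.normal(0, 0.06, years.size)), 1.0, 2.4)

fig, (left, right) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)
left.plot(years, velocity)
left.set_ylim(1, 10)                   # wide axis: the series looks flat
left.set_title("Axis from 1 to 10: looks 'constant'")
right.plot(years, velocity)
right.set_ylim(1, 2.5)                 # axis fitted to the data: clearly not constant
right.set_title("Axis fitted to the data")
for ax in (left, right):
    ax.set_xlabel("Year")
    ax.set_ylabel("Velocity")
plt.tight_layout()
plt.show()
```

Same numbers, two very different visual impressions – which is the whole point.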
But it went further. The formal econometric evaluation by Hendry, Ericsson and others at the time showed the work to be nonsense and misleading.
Hendry went on:
Testing Friedman and Schwartz’s equations revealed a considerable lack of congruence. Friedman and Schwartz phase-averaged their annual data in an attempt to remove the business cycle, but phase averaging still left highly autocorrelated, non-stationary processes.
Which in English says their estimated equations were rubbish.
Remember that this work was being published and promoted by Friedman and Schwartz at the peak of Margaret Thatcher’s reign in the UK. Central banks around the world had fallen for the Monetarist Kool-Aid and had imposed regimes called monetary targeting.
This involved the central bank announcing that it would target an x per cent growth in the money supply, which (because they claimed V was constant) translated into a nominal GDP growth rate of x per cent. Accordingly, if they wanted real growth to be 2 per cent and inflation to be 2 per cent, then x would be 4 per cent.
As Hendry puts it in the interview:
Margaret Thatcher – the Prime Minister – had instituted a regime of monetary control, as she believed that money caused inflation, precisely the view put forward by Friedman and Schwartz. From this perspective, a credible monetary tightening would rapidly reduce inflation because expectations were rational. In fact, inflation fell slowly, whereas unemployment leapt to levels not seen since the 1930s. The Treasury and Civil Service Committee on Monetary Policy (which I had advised in …) had found no evidence that monetary expansion was the cause of the post-oil-crisis inflation. If anything, inflation caused money, whereas money was almost an epiphenomenon. The structure of the British banking system made the Bank of England a “lender of the first resort,” and so the Bank could only control the quantity of money by varying interest rates.
The UK Guardian ran a front-page editorial at the time (on December 15, 1983) – entitled “Monetarism’s guru ‘distorts his evidence'” which drew on an article in the body of the newspaper by one Christopher Huhne which “summarized–in layman’s terms” the critique by Ericsson and Hendry of the work by Friedman and Schwartz.
Friedman, in turn, was furious and wrote to Hendry requesting he disassociate himself from the Guardian article.
In the book by J.D. Hammond (1996) Theory and Measurement: Causality Issues in Milton Friedman’s Monetary Economics, published by Cambridge University Press (page 199), you can see the letter that Hendry wrote to Friedman on July 13, 1984. In part it said:
… if your assertion is true that newspapers have produced ‘a spate of libellous and slanderous’ articles ‘impugning Anna Schwartz’s and … [your] … honesty and integrity’ then you must have ready recourse to a legal solution.
Friedman never sued!
As Hendry notes in the interview:
One of the criticisms of Friedman and Schwartz was that it was “unacceptable for Friedman and Schwartz to use their data-based dummy variable for 1921-1955 and still claim parameter constancy of their money-demand equation. Rather, that dummy variable actually implied nonconstancy because the regression results were substantively different in its absence. That nonconstancy undermined Friedman and Schwartz’s policy conclusions.”
As an aside, it took Hendry and Ericsson eight years to get their working paper published in the American Economic Review after what they referred to as a “prolonged editorial process”. Monetarism was dominant and the high priests were clearly exerting as much pressure as they could on the editors of the major orthodox journals to suppress research that was undermining the mainstream anti-government message.
Friedman and Schwartz had claimed (page 624) that their UK money demand model was constant (that is, stable in its estimated parameters):
… more sophisticated analysis … reveals the existence of a stable demand function for money covering the whole of the period we examine.
Stable equations are essential in econometrics because they can then be used for prediction. If the estimated relationship between X and Y is not stable, then you cannot conclude with confidence that a change in one variable will produce a predictable response in the other.
Friedman and Schwartz (FS) knew that if they wanted to promote Monetarism (and they were the key promoters) then they had to produce stable equations – by hook or by crook (mostly crook).
What Hendry and Ericsson (HE) found is that FS did “not formally test for constancy, and many investigators would regard the need for the data-based shift dummy … spanning one-third of the sample as prima facie evidence against the model’s constancy.”
This refers to an ad hoc variable that FS used to ensure the equation looked constant. There was no theoretical reason to include such a “shift dummy”. The term dummy is apposite here because the variable has no meaning of its own. It is just a statistical fix to achieve some result. Sometimes a dummy has meaning (say, when a major event occurs like an earthquake or a sudden policy innovation). But often it is just a fudge.
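A toy illustration of why a data-based shift dummy spanning a large part of the sample is evidence against constancy rather than for it (my own construction with invented data, not the FS series; statsmodels is assumed): generate a relationship whose intercept shifts over a third of the sample, and note that the equation only “looks” constant once a dummy is added for exactly that period.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 150
x = rng.normal(size=T)
shift = (np.arange(T) >= 50) & (np.arange(T) < 100)       # a "1921-1955 style" sub-period
y = 1.0 + 0.8 * x + 2.0 * shift + rng.normal(0, 0.5, T)   # intercept jumps in that sub-period

# Without the dummy the residuals for the shifted period are systematically off:
# the estimated relationship is NOT constant across the sample.
fit_plain = sm.OLS(y, sm.add_constant(x)).fit()
print("mean residual in shifted period (no dummy):",
      round(float(fit_plain.resid[shift].mean()), 2))

# With a data-based dummy for that third of the sample the equation "looks" constant,
# but only because the dummy is soaking up the very nonconstancy in question.
X = sm.add_constant(np.column_stack([x, shift.astype(float)]))
fit_dummy = sm.OLS(y, X).fit()
print("mean residual in shifted period (with dummy):",
      round(float(fit_dummy.resid[shift].mean()), 2))
```

The dummy makes the reported residuals behave, but it is the dummy itself that is telling you the underlying relationship changed.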
As HE showed, it was certainly a fudge in the FS models. They wrote that the FS models were not constant once the evidence was properly considered and that the:
… inferences which Friedman and Schwartz draw from their regression would be invalid … [due to biases]
They also report a range of problems with the FS models.
In conclusion they say:
Taking this evidence together … [the FS preferred model] … is not an adequate characterisation of the data and is not consistent with the hypothesis of a constant money-demand equation … none of the relevant hypotheses could have been tested by Friedman and Schwartz … without their having obtained a rejection …
So, overall, the FS claim that money demand was constant – which means that the velocity of circulation was constant – was rejected outright.
HE conclude that “at the heart of model evaluation are issues of model credibility and validity and the role of corroborating evidence”. They concluded that the work of FS was “lacking in credibility” and that the evidence undermined many of their inferences.
Nearly all of the FS claims were rejected.
The point is obvious. There is no substantial support for the mainstream macroeconomics model. It has failed over and over to accord with events in the real world.
There is an historical litany of dodgy econometric studies that have been used to justify the ideological hatred for government intervention.
Fortunately, some of them have been grandly exposed – as in the case of Friedman and Schwartz. More often, however, they are not exposed as the frauds that they are.
The problems still resonate today because the Monetarist legacy – prioritising monetary policy and eschewing fiscal policy – still dominates the public debate. The world is worse off as a consequence. There are no grounds for this policy bias. Just pure ideology clouded by a smokescreen.
I wonder if Anna Schwartz ever reflected on her own dubious contribution to the debate.
That is enough for today!
(c) Copyright 2012 Bill Mitchell. All Rights Reserved.