I was reading a working paper from the Bank for International Settlements the other day – The “Austerity Myth”: Gain Without Pain? (published November 2011) and written by Roberto Perotti. The author can hardly be described as non-mainstream and has collaborated with leading mainstream authors in the past. His work with Harvard’s Alberto Alesina in the 1990s has been used by conservatives to justify imposing fiscal austerity under the guise that it would provide the basis for growth. In this current paper, Roberto Perotti tells a different story – one that has been ignored by the commentators who still wheel out his earlier work with Alesina as the last word on matters pertaining to fiscal austerity. In his current work, we learn that the conditions that allowed some individual nations in isolation to grow are not present now and that his current research casts “doubt on … the “expansionary fiscal consolidations” hypothesis, and on its applicability to many countries in the present circumstances”. Why don’t the conservatives quote from that paper?
On June 30, 2010, Business Week carried a story – Keynes vs. Alesina. Alesina Who? – which concluded that:
The bottom line Alesina has provided the theoretical ammunition fiscal conservatives want … Alberto Alesina is a new favorite of fiscal hawks like former President George W. Bush’s chief economic adviser, N. Gregory Mankiw. A professor of economics at Harvard University, the 53-year-old Italian disputes the need for more government spending to prop up growth and advocates spending cuts instead. This is Alesina’s hour. In April in Madrid, he told the European Union’s economic and finance ministers that “large, credible, and decisive” spending cuts to reduce budget deficits have frequently been followed by economic growth.
Alesina’s predictions in Madrid were made two years ago. The situation has deteriorated remarkably in Spain and elsewhere since then.
I analysed some of Alesina’s work in this blog – The deficit terrorists have found a new hero. Not!.
Roberto Perotti’s BIS paper was delivered to a conference in June 2011 – “Fiscal policy and its implications for monetary and financial stability” in Lucerne, Switzerland. The “event brought together senior representatives of central banks and academic institutions” and involved presentations and input from discussants. I will also reflect on Perotti’s discussant later.
Roberto Perotti notes that budget deficits “have risen in virtually all countries, due to the recession and, in some cases, to bank support measures” and that the way forward is now “a matter of bitter controversy”.
He juxtaposes the two conflicting opinions:
For some, governments should start reining in deficits now, even though most countries have not fully recovered yet; if done properly – namely, by reducing spending rather than by increasing taxes – budget consolidations are not harmful, and might indeed result in a boost to GDP. This is one interpretation of Alesina and Perotti (1995) and Alesina and Ardagna (2009) (AAP thereafter) … For others, this evidence on expansionary government spending cuts is flawed, and the aftermaths of a recession are the worst time to start a fiscal consolidation.
In particular, the author criticises the AAP approach (that is, he criticises his own earlier work) as well as more recent IMF work and says that “(b)oth approaches are also subject to the reverse causality problems that are almost inevitable with yearly data, and both lump together countries and episodes with possibly very different characteristics”.
He also agrees with the IMF criticism of the AAP research (that is, his earlier work), which:
… fails to remove important cyclical components, and that this failure can explain a spurious finding of expansionary budget consolidations.
As a way forward – and a way around the problems which bedevilled his earlier work – he says that “one can learn much from detailed case studies”. The paper presents “four, covering the largest, multi-year fiscal consolidations that are commonly regarded as spending based … Two of these episodes – Denmark 1982-86 and Ireland 1987-90 – were exchange rate based consolidations, while the other two – Finland 1992-98 and Sweden 1993-98 – were undertaken in the opposite circumstances, after abandoning a peg”.
He seeks to determine whether there is “evidence that large budget consolidations, particularly those that are based mainly on spending cuts, have expansionary effects in the short run” and “how useful is the experience of the past as a guide to the present”.
I will leave it to the reader to delve into the case studies in detail but the summary conclusions drawn from his case studies are as follows:
1. “Discretionary fiscal consolidations are often smaller than estimated in the past, and spending cuts are less important than is commonly believed … typically these consolidations relied on tax increases to a much larger extent than previously thought”.
2. “All stabilizations were associated with expansions in GDP. Except in Denmark … the expansion of GDP was initially driven by exports. Private consumption typically increased 6 to 8 quarters after the start of the consolidation. And as national source data (as opposed to OECD data that turned out to be incorrect) show, the expansion in what was probably the most famous consolidations of all – Ireland – turned out to be much less remarkable than previously thought”.
3. “Denmark relied on an internal devaluation via wage restraint and incomes policies as a substitute for a devaluation. It exhibited all the typical features of an exchange rate based stabilization: inflation and interest rates fell fast, domestic demand initially boomed; but as competitiveness slowly worsened, the current account started worsening, and eventually growth ground to a halt and consumption declined for three years. The slump lasted for several years”.
4. The Irish “government depreciated the currency before starting the consolidation and fixing the exchange rate within the European Exchange Rate Mechanism (ERM). Again wage restraint and incomes policies played a major role, but a key feature was the concomitant depreciation of the sterling and the expansion in the UK, that boosted Irish exports and contributed to reducing the nominal interest rate”.
5. “The two countries that instead floated the exchange rate while consolidating, Finland and Sweden, experienced large real depreciations and an export boom.”
6. “The budget consolidations were accompanied by large decline in nominal interest rates, from very high levels”.
7. “Wage moderation was essential to maintain the benefits of the depreciations and to make possible the decline of the long nominal rates”.
8. “Incomes policies were in turn instrumental in achieving wage moderation, and in signaling a regime shift from the past … However, the international experience suggests that incomes policies are effective for a few years at best”.
In this blog – Fiscal austerity – the newest fallacy of composition – I considered why export-led growth strategies, which are used to justify fiscal austerity in the Eurozone and the UK at present, cannot work for all nations that are simultaneously cutting back on their domestic demand.
The typical EMU policy strategy – scorch the domestic economy by undermining pension entitlements and the wages and conditions of the workers – and hope for an external boost is thus deeply flawed. One country might get away with it but not all countries.
The only reliable way to avoid a fallacy of composition like this is to maintain adequate fiscal support from spending while the private sector reduces its excessive debt levels via saving. That strategy is also likely to be the best one for stimulating exports because world income growth will be stronger and imports are a function of GDP growth.
Fiscal austerity not only undermines the rights and welfare of citizens but also undermines the source of export revenue – domestic aggregate demand.
That is largely the message that Roberto Perotti’s paper confirms.
These results cast doubt on some versions of the “expansionary fiscal consolidations” hypothesis, and on its applicability to many countries in the present circumstances. A depreciation is not available to EMU members, except possibly vis-à-vis non-Euro members. An expansion based on net exports is not available to the world as a whole. A further decline in interest rates is unlikely in the current situation. And incomes policies are not popular nowadays, and in any case probably ineffective for more than a few years.
Which leaves the case for fiscal austerity en masse – which is the current policy flavour – without much theoretical authority.
Of course, when the facts get in the way of the theory, the mainstream ideologues hastily declare the facts are wrong and get back to business as usual. The discussant for Roberto Perotti’s paper was University of Chicago economist Harald Uhlig – who thinks heterodox economists are practitioners of Mayan cosmology (see my blog – Sociopaths, closed minds and a bit of Mayan cosmology).
In relation to whether fiscal consolidation damaged output growth, he cited his own recent paper which:
… provides such an answer in a simple calibrated neoclassical growth model. That model lacks many features that may be crucial, but it does establish an important benchmark.
This is a standard mainstream ploy (deny the facts when they contradict the theory). Calibrated models are stylised, numerical models, which means that the model-builder makes up some numbers and imposes them on a mathematical simulation structure.
In other words, they are designed to reproduce the underlying theoretical structure in numerical form. They provide no additional information as to whether that theoretical structure is an accurate depiction of reality or provides any meaningful knowledge about how the actual monetary system operates.
In this case, Uhlig’s theoretical model is a standard Barro-type model in which Ricardian equivalence is assumed – that is, after some initial positive effect of a fiscal stimulus:
Pretty soon down the road, the effect on output is actually negative rather than positive, as the need to raise taxes to pay for the initial largesse kicks in.
Uhlig claims that in relation to his model (which lacks crucial features) “One can therefore flip the dynamics upside down for a hypothetical consolidation … that starts with a cut in government spending … There is an initial dip in output, but it is perhaps not as bad as many would fear. Moreover, it is followed by a subsequent rise in output down the road, as the stimulating effects of future tax cuts are felt”.
Note here the model in question says that tax rates have to rise to pay back the deficit, rather than tax revenue rising on the back of growth as the automatic stabilisers move in a counter-cyclical fashion.
It is the latter dynamic which typically characterises expansions and leads to lower budget deficits at the top of the boom relative to the trough. Governments rarely “raise taxes” to generate revenue to pay back past deficits. For example, in a downturn, a substantial component of the rise in the deficit is the lost tax revenue and increased welfare payments that are directly linked to the state of the cycle.
When the cycle moves back into a growth phase that component – the cyclical deficit – is eliminated.
Uhlig’s model is calibrated so that tax rate increases occur to ensure past flows of deficit spending are reversed. This sort of adjustment is not characteristic of the way fiscal adjustments occur.
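The contrast between the two adjustment mechanisms can be shown with a toy simulation (all numbers here are hypothetical, chosen only to expose the cyclical logic): hold the tax rate fixed, let GDP recover, and the cyclical deficit closes on its own without any discretionary tax increase.

```python
# Toy illustration of automatic stabilisers: the tax RATE never changes,
# yet the deficit shrinks as GDP recovers, because revenue tracks income
# and cyclical welfare payments fall. All numbers are hypothetical.

TAX_RATE = 0.30          # constant: no discretionary tax increases
FIXED_SPENDING = 28.0    # discretionary government spending, held constant
POTENTIAL_GDP = 100.0

def cyclical_welfare(gdp):
    """Welfare payments rise with the output gap (a hypothetical rule)."""
    gap = max(POTENTIAL_GDP - gdp, 0.0)
    return 0.5 * gap     # 50 cents of support per dollar of lost output

def deficit(gdp):
    revenue = TAX_RATE * gdp
    spending = FIXED_SPENDING + cyclical_welfare(gdp)
    return spending - revenue

# A recession (GDP falls to 90) followed by a recovery back to potential.
for gdp in (100.0, 90.0, 95.0, 100.0):
    print(f"GDP {gdp:5.1f} -> deficit {deficit(gdp):6.2f}")
```

With the rate held at 30 per cent, the deficit swells in the downturn and is eliminated as GDP returns to potential – which is the dynamic described above, not the rate hikes Uhlig’s calibration imposes.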
Ricardian Equivalence claims that consumers assume that the increased government spending has to be paid back. This leads to the most basic neo-liberal lie of them all – that if governments cut their spending the private sector will fill the gap. Mainstream economic theory claims that private spending is weak because we are scared of the future tax implications of the rising budget deficits.
But the overwhelming evidence shows that firms will not invest while consumption is weak, and households will not spend because they are scared of becoming unemployed and are trying to reduce their bloated debt levels.
Calibrated models just generate numbers for predictable mathematical relationships. They are different from estimated models where the researcher uses statistical and econometric techniques to confront theoretical propositions (operationalised into a statistical equation) with real world data. In this case, the data might suggest that the theory is irrelevant.
For example, I might assume that the Earth is flat and that the cliff where the boats fall off is 150 kms from where I am. I work out that a boat can travel at 10 kms an hour, so my calibrated model will predict that the boat falls off the edge of the Earth some 15 hours after setting out from port. The prediction is a direct result of the structure of the model.
However, I might use data to estimate various astronomic and geographic relationships which tell me that the Earth is not flat – the data rejects the theory as an invalid description of the way the “world” works. In this case, I would seek a new explanation of the phenomenon and come up with the estimate that the boat will be sailing “around” the world and will be 150 kms away from its point of departure.
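The boat example can be written out in a few lines (a sketch using the same numbers as above): the calibrated prediction follows mechanically from the assumed structure, and only a confrontation with observation can reject that structure.

```python
# Calibrated flat-Earth model: the prediction is baked into the assumptions.
EDGE_DISTANCE_KM = 150.0   # assumed distance to the "edge of the Earth"
SPEED_KMH = 10.0           # assumed boat speed

hours_to_edge = EDGE_DISTANCE_KM / SPEED_KMH
print(f"Calibrated prediction: boat falls off after {hours_to_edge:.0f} hours")

# Confronting the model with data: the boat is observed still sailing
# well past the predicted time, so the flat-Earth structure is rejected.
observed_hours_afloat = 24.0
model_rejected = observed_hours_afloat > hours_to_edge
print(f"Theory rejected by observation: {model_rejected}")
```

No amount of re-running the first two lines tells you anything about the world; only the observation in the second step does.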
When we think of a theoretical model, it has often been claimed that the assumptions do not have to be “realistic” for the model to have predictive value. This was the approach spelt out in Milton Friedman’s famous 1953 article – The Methodology of Positive Economics.
This is an oft-cited article when the mainstream claim that what they do is value free – Friedman said “Positive economics is in principle independent of any particular ethical position or normative judgments”.
But the classic idea in that article which the mainstream continually bring up when criticisms such as that noted by Davidson above are made is captured by Friedman in this way:
In so far as a theory can be said to have “assumptions” at all, and in so far as their “realism” can be judged independently of the validity of predictions, the relation between the significance of a theory and the “realism” of its “assumptions” is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense) … The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions …
So Friedman is extolling the virtues of the predictive power of a theory or hypothesis rather than how you get to make the predictions. If it turns out that the predictions are empirically sound then the theory is useful – which is a very instrumental way of conducting science.
According to Friedman, one should not evaluate a “theory” by determining whether the assumptions accord with reality. He even thinks that “wildly inaccurate assumptions” are okay if they lead to useful predictions.
I have long taught that models are always “false” in the sense that we wouldn’t know what is truth anyway if we confronted it. So I have sympathy for the view that a theory is as good as its ability to capture the key empirical outcomes and do better at that than competing theories using conventional statistical and econometric diagnostic tools as a guide to making that assessment of relative worth. I won’t go into that discussion here as it gets fairly technical (and boring).
So I agree that we should not judge a model’s usefulness on the simplicity or abstractness of its reasoning. But model building is also an iterative process whereby a researcher pushes out a conjecture and considers how well it encompasses the existing knowledge and the known facts. A conjecture that cannot help us understand the known facts (that is, makes predictions that are violated by the known facts) is not worth much in terms of its knowledge potential. So we are often making conjectures and seeing how they stack up – this is not falsification in the Popperian sense, which is a fundamentally flawed conceptualisation of scientific endeavour.
Rather it is an iterative process where assumptions and structures might be varied if a conjecture is unsound (that is, cannot embrace the facts). Friedman and his cohort employed this approach as much as anyone, which makes his strong viewpoint that the assumptions do not matter appear rather inconsistent. But that is an aside.
But there is a more substantive point about the assumptions used in a model. One always has to be cognisant of the actual assumptions that are required for the logical conclusions (and predictions) to hold. If they do not hold then the logical conclusions cannot then be maintained with any “authority”. Despite Friedman’s famous “what if” claim that we should only focus on the predictions even if the assumptions are plainly wrong, you can be almost certain that if the assumptions are crazy then the conclusions will be worse.
The point is that if a predictive structure relies on a set of assumptions to generate its predictions, and the same predictions cannot be derived when one or more of those assumptions are relaxed or clearly shown to be invalid (in terms of human behaviour etc.), then the assumptions matter for the predictions.
The notion of Ricardian Equivalence falters on these grounds even before we examine its predictive capacity.
The modern version of Ricardian Equivalence was developed by Robert Barro at Harvard. For non-economists – this piece of neo-liberal dogma says that the non-government sector (consumers explicitly), having internalised the government budget constraint, will negate any government spending increase whether the government “finances” its spending via taxes or borrowing. So if the government spends and borrows, consumers will anticipate higher future taxes and spend less now, offsetting the stimulus.
The logic that the model is based on is as follows. First, start with the mainstream view that: (a) In the short-run, budget deficits are likely to stimulate aggregate demand as long as the central bank accommodates the deficits with loose monetary policy; and (b) in the long-run, the public debt build-up crowds out investment because it competes for scarce savings.
This view is patently false because deficits put downward pressure on the interest rate, and central banks issue debt to stop that downward pressure from wresting control of their target interest rate from them. Please read the suite of blogs – Deficit spending 101 – Part 1 – Deficit spending 101 – Part 2 – Deficit spending 101 – Part 3 – for more discussion of that point.
Further, there is no finite pool of saving except at full employment. Income growth generates its own saving (investment brings forth its own saving) and governments just borrow back the funds (drain bank reserves) $-for-$ that the deficits inject anyway. Banks create deposits when they create loans not the other way around. Please read the following blogs – Building bank reserves will not expand credit and Building bank reserves is not inflationary – for further discussion of that point.
But let’s stick with the mainstream argument for the moment so we can understand what Ricardian Equivalence is about. Barro then said that the government does “our work” for us. It spends on our behalf and raises money (taxes) to pay for the spending. When the budget is in deficit (government spending exceeds taxation) it has to “finance” the gap, which Barro claims is really an implicit commitment to raise taxes in the future to repay the debt (principal and interest).
Under these conditions, Barro then proposes that current taxation has equivalent impacts on consumers’ sense of wealth as expected future taxes.
For example, if each individual assesses that the government is spending $500 this year per head and collects $500 per head “to pay for it” then the individual will cut consumption by $500 because they are worse off.
Alternatively, if the individual perceives that the government has spent $500 this year but proposes to tax him/her next year at such a rate that the debt will be cleared then the person will still be poorer over their lifetime and will probably cut back consumption now to save the money to pay the higher taxes.
So the government spending has no real effect on output and employment irrespective of whether it is “tax-financed” or “debt-financed”. That is the Barro version of Ricardian Equivalence. The models suggest that individuals assess the total stream of income and taxes over their lifetime in making consumption decisions in each period.
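The mechanical content of the claim can be shown in a few lines (a sketch of the textbook arithmetic with hypothetical numbers, not an endorsement): if the household discounts at the government’s borrowing rate, a $500 tax today and a debt-financed $500 repaid with interest next year impose identical present-value burdens, so the model-consumer’s lifetime wealth – and hence consumption – is unchanged either way.

```python
# Sketch of the Ricardian Equivalence arithmetic (hypothetical numbers).
# The government spends 500 per head and either taxes now, or borrows now
# and taxes later to repay the debt with interest.

r = 0.05                              # interest rate = household discount rate (assumed)
spending = 500.0

tax_now = spending                    # tax-financed case
tax_next_year = spending * (1 + r)    # debt-financed: principal plus interest due later

pv_tax_now = tax_now
pv_tax_later = tax_next_year / (1 + r)   # discounted back one year

# Equal present-value burden => identical consumption response in the model.
assert abs(pv_tax_now - pv_tax_later) < 1e-9
```

The policy conclusion only follows if households actually behave this way – which is precisely what the assumptions discussed below (perfect capital markets, perfect foresight, infinite horizons) are needed to guarantee.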
On tax cuts, Barro wrote (in ‘Are Government Bonds Net Wealth?’, Journal of Political Economy, 1974, 1095-1117):
This just means that lower taxes today and higher taxes in the future when the government needs to pay the interest on the debt; I’ll just save today in order to build up savings account that will be needed to meet those future taxes.
So what are the assumptions that Barro makes which have to hold in their entirety for his logical conclusion to follow? Note this is not to say that any of his reasoning is a sensible depiction of the basic operations of a modern monetary system. It just says that if we suspend disbelief and go along with him for the ride, then the only way he can derive the predictions he does from his model is if the following assumptions hold forever.
Should any of these assumptions not hold (at any point in time), then his model cannot generate the predictions and any assertions one might make based on this work are groundless – meagre ideological statements. That is, one could not conclude that it was this particular model that was “explaining” the facts even if the predictions of the model were consistent with the facts.
This brings us to the problem of observational equivalence, which haunts modellers like me. It is basically a problem where two competing theories are “consistent” with the same set of facts and there is no way of disentangling the theories on empirical grounds. I won’t go into the technicalities of that problem.
As I have noted previously, the predictions forthcoming by those who adhere to the notion of Ricardian Equivalence rely on the following assumptions holding always.
First, capital markets have to be “perfect” (remember those Chicago assumptions) which means that any household can borrow or save as much as they require at all times at a fixed rate which is the same for all households/individuals at any particular date. So totally equal access to finance for all.
Clearly this assumption does not hold across all individuals and time periods. Households have liquidity constraints and cannot borrow or invest whatever and whenever they desire. People who play around with these models show that if there are liquidity constraints then people are likely to spend more when there are tax cuts even if they know taxes will be higher in the future (assumed).
Second, the future time path of government spending is known and fixed. Households/individuals know this with perfect foresight. This assumption is clearly without any real-world correspondence. We do not have perfect foresight and we do not know, to the last dollar, what the government in 10 years’ time is going to spend (even if we knew what political flavour that government might be).
Third, there is infinite concern for future generations. This point is crucial because even in the mainstream model the tax rises might come at some very distant time (even next century). There is no optimal prediction that can be derived from their models that tells us when the debt will be repaid. They introduce various stylised – read: arbitrary – time periods when the debt is repaid in full, but these are not derived in any way from the internal logic of the model nor are they grounded in any empirical reality. They are just ad hoc impositions.
So the tax increases in the future (remember I am just playing along with their claim that taxes will rise to pay back the debt) may be paid by someone 5 or 6 generations ahead of me. Is it realistic to assume I won’t just enjoy the increased consumption that the tax cuts (or increased government spending) bring now, and leave it to those a hundred or more years ahead to “pay for”?
Certainly our conduct towards the natural environment is not suggestive of a particular concern for the future generations other than our children and their children.
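A quick calculation shows why the infinite-horizon assumption is doing so much work (the discount rate and horizons here are arbitrary illustrative choices): a tax bill falling five or six generations ahead is worth almost nothing in present-value terms to anyone alive today, so without dynastic altruism the claimed offset evaporates.

```python
# Present value of a distant tax liability (illustrative numbers only).
discount_rate = 0.03       # hypothetical annual discount rate
future_tax = 500.0         # per-head tax assumed due at some future date

# 150 years is roughly the 5-6 generations mentioned above.
for years_ahead in (10, 50, 150):
    pv = future_tax / (1 + discount_rate) ** years_ahead
    print(f"Tax due in {years_ahead:3d} years: present value ${pv:.2f}")
```

At a 3 per cent discount rate the 150-year liability is worth only a few dollars today – hardly enough to make a rational consumer forgo $500 of consumption now.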
If we wrote out the equations underpinning Ricardian Equivalence models and started to alter the assumptions to reflect more real-world facts, then we would not get the stark results that Barro and company derived. In that sense, we would not consider the framework to be reliable or very useful.
But we can also consider the model on the basis of how it stacks up in an empirical sense. When Barro published his paper in 1974 there was a torrent of empirical work examining its “predictive capacity”.
It was opportune that about that time the US Congress gave out large tax cuts (in August 1981) and this provided the first real world experiment possible of the Barro conjecture. The US was mired in recession and it was decided to introduce a stimulus. The tax cuts were legislated to be operational over 1982-84 to provide such a stimulus to aggregate demand.
Barro’s adherents, consistent with the Ricardian Equivalence models, all predicted that consumption would not change and that saving would rise to “pay for the future tax burden” implied by the rise in public debt at the time.
What happened? If you examine the US data you will see categorically that the personal saving rate fell between 1982-84 (from 7.5 per cent in 1981 to an average of 5.7 per cent in 1982-84).
In other words, Ricardian Equivalence models got it exactly wrong. There was no predictive capacity irrespective of the problem with the assumptions. So on Friedman’s own reckoning, the theory was a crock.
Once again this was an example of a mathematical model built on un-real assumptions generating conclusions that were appealing to the dominant anti-deficit ideology but which fundamentally failed to deliver predictions that corresponded even remotely with what actually happened.
Barro’s RE theorem has been shown to be a dismal failure regularly and should not be used as an authority to guide any policy design.
Please read my blog – Deficits should be cut in a recession. Not! – for more discussion on this point.
The more recent laboratories in the Eurozone and the UK are demonstrating categorically that firms and households do not act in Ricardian ways. Consumers fear unemployment and refuse to spend when fiscal austerity is imposed and firms will not invest when consumers are not spending.
Harald Uhlig finished his commentary with this:
Increasingly, though, governments have little choice but to consolidate, or else face the wrath of markets. And consolidation often is wise in the medium-to-long run. Perhaps, then, the short-run is not all that important, except that politicians should refrain from overselling potential short-run benefits.
In other words, the damage one causes in the “short-run” (how long is that?) is “not all that important”. The current crisis has endured for more than four years now and unemployment in those nations that are under self-imposed attack from fiscal austerity is rising. More than 50 per cent of Spanish youth are unemployed. Approaching one-quarter of the overall Spanish labour force is unemployed, and Greece is in a similar position.
But that is “not all that important” because Dr Uhlig has made up a model, put some numbers on some equations which reflect his defunct theory, then “flipped the dynamics” to “prove” that in the long-run all will be well.
Preposterousness masquerading as analysis. A disgrace.
The mainstream financial press has been keen to quote Alesina and Perotti (1995) and related publications from the 1990s, which purported to show how nations that engaged in fiscal contraction at a time when economic growth was faltering were able to recover. These articles are used to justify the fiscal austerity now being imposed at massive cost in many nations.
However, the same commentators have not seen fit to quote or refer to Perotti’s 2011 research, which demonstrates that the conditions that might have allowed some nations (in isolation) to grow successfully during a period of fiscal consolidation are not present now in Europe or elsewhere, and so fiscal austerity will only cause damage.
Why are the conservatives so selective in their citations? No need to answer – we all know it.
I have an afternoon of meetings ahead.
That is enough for today!