A new breed of economics graduates is needed (did I say desperately)

There were two interesting articles I read (among others) in recent days that attack mainstream economic analysis in different ways. The first, published August 18, 2013 – Removing deadweight loss from economic discourse on income taxation and public spending – is by Northwestern Economics Professor Charles F. Manski. He wants our profession to dump all its negative welfare analysis about the impacts of taxation (as “deadweight losses”) and, instead, focus on the benefits that come from government spending, in particular, highly productive infrastructure provision. So, an attack from the inside!

The second article was a Bloomberg Op Ed (August 21, 2013) – Economists Need to Admit When They’re Wrong – by the theoretical physicist, Mark Buchanan, who has taken a set against my profession in recent years. Not without justification and with some panache, one should add.

They both add up to the same conclusion – mainstream economics is defunct and we should decommission teaching programs throughout the world and introduce new progressive approaches to the discipline that will produce a new breed of useful PhD graduates, rather than the moribund graduate classes that get rubber-stamped out of our higher education institutions, ad nauseam, at present.

Charles Manski’s article is an interesting twist. He clearly sees the “anti-tax rhetoric evident in much lay discussion of public policy” as being informed by the “prevalent negative language of professional economic discourse”.

He notes that economists “regularly write about the ‘inefficiency’, ‘deadweight loss’, and ‘distortion’ of income taxation”, which, of course, stem from the textbook models they deploy and the pedagogy they use and pass on to students.

It is a self-fulfilling dynamic – students are imbued with an anti-government zeal from day one of their economics courses because they are forced to believe that the so-called model of perfect competition is an applicable benchmark of welfare optimality and that government policies – by construction – lead to departures from that utopia.

The language is then all value-laden – these departures are bad – because the benchmark is good. Students are led to believe these departures are inefficient – which is taken to mean wasteful, destructive and loss-incurring.

Efficiency is the perfectly competitive outcome (except for the concession that in some rare cases there is so-called “market failure” which justifies public intervention). How pervasive might market failure be? No real response – sometimes but clearly rarely.

Charles Manski quotes Harvard academic, Martin Feldstein from 1999:

The traditional method of analysing the distorting effects of the income tax greatly underestimates its total deadweight loss as well as the incremental deadweight loss of an increase in income tax rates.

I provided my opinion of Feldstein in this blog – Martin Feldstein should be ignored. Feldstein was one of the economists featured in the 2011 investigative movie – Inside Job – which the Director Charles Ferguson said was about “the systemic corruption of the United States by the financial services industry and the consequences of that systemic corruption.”

Charles Manski writes, in relation to the standard “welfare economics” which provides so-called mathematical proofs that government intervention (taxation in this case) is bad, that:

… prominent applied public economists continue to take the theory quite seriously … [and that] … The Feldstein article and similar research on deadweight loss appear predestined to make income taxation look bad … It focuses attention entirely on the social cost of financing government spending, with no regard to the potential social benefits.

A recurring theme relates to whether the public really understand the benefits of government service and infrastructure delivery. Even within a mainstream framework there is virtually no emphasis on these benefits.

From a Modern Monetary Theory (MMT) perspective, the idea that taxation funds government spending is inapplicable. Please read my blog – Taxpayers do not fund anything – for more discussion on this point.

But from a mainstream framework, which pretends to evaluate resource usage on the basis of costs and benefits, the scarcity of research on the benefits of government spending relative to the so-called costs of taxation is telling.

I used to ask students to imagine a world where everything was private – from the moment you left your front gate in the morning to the time you arrived back there in the evening. In this world, all transitions would have to be negotiated via contracts with private providers. Imagine the possibility that at each street corner you had to negotiate a price and other characteristics to enter the next stage of your journey. A trivial example, but it only gets more complex from there.

But Charles Manski’s point is worth thinking about. He says that:

If applied economists are to contribute fair and balanced analysis of public policy … it is essential that we jointly evaluate taxation and public spending within a framework that restricts attention to feasible tax instruments and that makes a reasonable effort to approximate the structure of the actual economy.

He then specified how that might be done, drawing on the early 1970s work of James Mirrlees. Of this work, we learn that value-laden concepts such as “inefficiency, deadweight loss, and distortion”, which prejudice our thinking against government spending, were absent, and that taxation and spending were evaluated in a consistent framework to consider net outcomes.

While I won’t go into great detail on Charles Manski’s recommendations relating to a social-welfare function approach as the “only normative concept required for evaluation of public policy”, his main point can be understood when he says:

… the essential feature of my research is the transparent way that it characterises how public spending on infrastructure may enhance private productivity … [and that] … Increasing the tax rate is socially beneficial if and only if the additional infrastructure spending enabled by the tax rise yields a more than commensurate increase in wages.
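
A minimal way to formalise that last condition (my own notation and simplification, not Manski’s actual model): write the wage as a function $w(\tau)$ of the tax rate $\tau$, with the revenue $\tau\,w(\tau)$ assumed to be spent entirely on productivity-enhancing infrastructure. A worker’s after-tax income is then $(1-\tau)\,w(\tau)$, and a small rise in the tax rate leaves the worker better off if and only if

$$\frac{d}{d\tau}\Big[(1-\tau)\,w(\tau)\Big] > 0 \quad\Longleftrightarrow\quad (1-\tau)\,w'(\tau) > w(\tau)$$

that is, the wage gain generated by the extra infrastructure must be more than commensurate with the extra tax handed over.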

So while his work remains firmly in the mainstream paradigm, it clearly recommends a break from the manic adherence to the perfectly competitive welfare economics that dominates public policy making.

I wonder what Charles Manski would think if he dropped “the use of tax revenue to finance public spending on infrastructure” as his starting point?

Once one jettisons that flawed assumption, the rest of the analytical framework would also fall apart. But it is salutary that a mainstream economist recognises the loaded terminology of the mainstream approach, which distorts the public debate and leads to poor public policy.

The importance of his observation is that most of the deficit terrorist rhetoric that journalists love to promote is fed by the economists whom Manski is criticising.

Even within the mainstream approach – characterised by the assumption that taxes fund government spending – the deficit terrorists would have no oxygen if they heeded the words of Charles Manski.

The second article, as noted, is not an insider’s contribution. Mark Buchanan, a theoretical physicist, has written a number of highly critical and, at times, humorous articles about the mainstream of my profession.

In this article, he notes that “scientific activity” is not about proving one’s theories are right (the so-called truth pursuit that belongs in religion) but rather:

… what matters most is figuring out what’s wrong — an endeavor in which the economics profession has been failing spectacularly.

I have long taught my students in econometrics that, at best, all they are doing is establishing that the current conjecture is both data-consistent and, hopefully, the best explanation currently around.

The words “tentatively adequate” are an apt description of how students should think of their applied results.

Mark Buchanan quotes a fellow-physicist:

Unscientific ideas, by contrast, have a bloblike ability to conform to any set of facts. They are difficult to prove wrong, and so don’t teach us much.

He notes that it was a common feeling in the early days of the GFC that mainstream “economic thinking” was profoundly wrong (“profound errors”) and that notions such as the “wonderful efficiency and inherent stability of modern markets, all supposedly supported by volumes of sophisticated mathematics” could be “finally jettison(ed)” due to their deep flaws and new ideas could emerge.

I thought that too and have been somewhat amazed at the capacity of the public to continue accepting the hogwash from my colleagues. How many more times will they be wrong?

Mark Buchanan correctly notes that:

The dominant paradigm in macroeconomics recovered remarkably quickly, leading one to wonder if any conceivable turn of events could falsify the prevailing faith. Much of academia went into complete denial, while some people suggested that a few minor tweaks may be necessary to put things back on track.

He cites an example of recent research that is at the more unbelievable end of the rubbish that comes out every day from the mainstream economists.

This article – Inflation in the Great Recession and New Keynesian Models – from some economists at the Federal Reserve Bank of New York is a classic!

Only read it if you are good at mathematics and want to waste 30 minutes of your life.

Its brief is to resurrect the standard DSGE model, which failed dramatically to predict the crisis. They accomplish this by adding what they call “financial frictions” and then add some variations on the standard “marginal prior distributions” that the previous DSGE models used – what they call a “looser prior, one that has less influence on the model’s forecasting performance”.

They also “fix parameters” to make the theoretical model tractable as a state-space representation.

In English? The original made-up priors (beliefs about model parameters) clearly give shocking forecasting outcomes, so they had to loosen them up a bit to get better forecasting performance.

In other words, fudge after fudge with no theoretical or behavioural justification provided for the values they chose.
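
To see the mechanics of that fudge in miniature, here is a toy sketch (my own, in Python – nothing to do with the New York Fed’s actual code or model) of how the tightness of a prior drags a Bayesian estimate, and hence the forecast, towards whatever the modeller believed at the outset:

```python
# A toy illustration (not the NY Fed's model): the tighter the prior imposed
# on a parameter, the more the "estimate" reflects the modeller's belief
# rather than the data - and the forecasts follow suit.
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process y_t = rho * y_{t-1} + e_t with true rho = 0.9
true_rho, T = 0.9, 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_rho * y[t - 1] + rng.normal()

x, target = y[:-1], y[1:]   # lagged values and current values
sigma2 = 1.0                # innovation variance, treated as known for simplicity

def posterior_mean(prior_mean, prior_var):
    """Posterior mean of rho under a conjugate N(prior_mean, prior_var) prior."""
    data_precision = x @ x / sigma2
    prior_precision = 1.0 / prior_var
    ols = (x @ target) / (x @ x)          # what the data alone say about rho
    post_var = 1.0 / (prior_precision + data_precision)
    return post_var * (prior_precision * prior_mean + data_precision * ols)

# A dogmatic prior centred away from the truth versus a "looser" prior
tight = posterior_mean(prior_mean=0.2, prior_var=1e-4)
loose = posterior_mean(prior_mean=0.2, prior_var=1.0)

print(f"posterior mean of rho, tight prior: {tight:.3f}")  # dragged towards the prior belief of 0.2
print(f"posterior mean of rho, loose prior: {loose:.3f}")  # close to the true value of 0.9
print(f"one-step forecast, tight prior: {tight * y[-1]:.3f}")
print(f"one-step forecast, loose prior: {loose * y[-1]:.3f}")
```

Loosen the prior and the data get to speak; tighten it and the “estimate” is largely the modeller’s original belief dressed up in posterior clothing. Nothing in the exercise tells you what grounds there were for that belief in the first place.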

I have spent many years estimating econometric models and I would like to think I know all the tricks and all of the ways in which specific desired results can be produced from a given dataset.

The capacity to produce whatever is desired increases in a DSGE framework because of all the numerical priors that can be imposed on the model solution and the results it produces.

Best practice would tell us that there should be solid grounds for imposing any prior and that the process of model selection, estimation, and forecasting should be entirely transparent and able to be replicated by anyone who has access to the same data.

I doubt very much whether I could replicate the results of this paper very easily. That is not because the techniques used are overly complex or that the data is not available. It is just that the fudge is entirely opaque.

Please read my blog from 2009 – Mainstream macroeconomic fads – just a waste of time – for more discussion on DSGE modelling and New Keynesian economics.

Mark Buchanan says of this paper:

… an economic research team announced that after several years of determined effort, they had found a way that standard theory could explain the aftermath of the crisis after all. They managed, with enough tinkering in the workshop, to hammer one of the profession’s beloved mathematical models — known as a dynamic stochastic general equilibrium model — into a form that could produce something crudely like the 2008 financial meltdown and ensuing recession.

He wonders what motivates “such desperate efforts at rationalization”.

The answer he provides (from the work of economic historian Philip Mirowski) is that any departure from this sort of modelling would undermine our status in society:

Much of economists’ authority stems from their claims to insight on which policies will make people better off. Those claims arise from core theorems of mathematical economics — known as welfare theorems — which in turn depend on some wildly implausible assumptions, such as the idea that people are perfectly rational and make decisions with full awareness of all possible futures.

In my blog cited above (Mainstream macroeconomic fads – just a waste of time) I note that the theoretical models that New Keynesian economists build are incapable of dealing with the real world and so ad hoc responses to empirical anomaly quickly enter the fray.

But trying to build real world characteristics (such as a lagged dependence between output and inflation) into their models from the first principles that they start with is virtually impossible.

No New Keynesian economist has picked up this challenge, and instead they just modify their models with a number of arbitrary (empirically-driven) additions.

But the point is that once they modify their theoretical models to include some empirical facts the so-called “desirable” welfare properties of the theoretical models disappear.

So, like most of the mainstream body of theory, they claim virtue based on so-called microeconomic rigour, but when that “rigour” fails to deliver anything remotely consistent with reality, they respond to the anomalies that are pointed out with ad hoc (non-rigorous) tack-ons.

So at the end of the process there is no rigour at all – using “rigour” in the way they use it, which is, as an aside, not how I would define tight analysis.

Mark Buchanan correctly notes therefore:

If economists used more realistic assumptions, the theorems wouldn’t work and claims to any insight about public welfare would immediately fall apart. Take a few tiny steps from mathematical fantasy into reality, and you quickly have no theory at all, no reason to think the market is superior to alternatives. The authority of the profession goes up in a puff of smoke.

This is a point not often understood. The real world is nothing much like the theoretical world that mainstream economists hide out in, collecting their pay in secure jobs. Talk about inefficiency and unproductive pursuits!

In this blog – Defunct but still dominant and dangerous – I introduced the work of the late Kelvin Lancaster, an Australian economist who, like many in my profession, ventured to the US for graduate school because that was increasingly thought to be where it was at! Cultural and ideological cringe, mostly.

In 1956 two economists (Richard Lipsey and Kelvin Lancaster) came up with a very powerful new insight which was called the Theory of the Second Best. This was, in fact, a devastating critique of mainstream welfare economics.

You won’t hear much about the theory any more because, like all the refutations of mainstream theory, it got swept under the dirty neoclassical carpet and economics lecturers using homogenised, ideologically-treated textbooks continue blithely as if nothing happened.

In English, the Theory of Second Best basically says that if the assumptions of the mainstream theory do not all hold in a particular situation, then trying to apply the results of the theory in that case is likely to make things worse, not better.

So “if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the ones that are usually assumed to be optimal” (Source).

This is very applicable to the use of the model of perfect competition which requires several assumptions to be valid (for example, perfect information, perfect flexibility of prices, perfect foresight, no market power being wielded by firms, workers or anyone etc) for the main theoretical insights (results) to have validity.

Virtually none of the required assumptions apply in the real world.

Economists often say that governments should try to dismantle real world “rigidities” (as they call them) so that they can move the economy closer to (but not all the way to) perfect competition – because in their fantasy comic book texts this is the ideal state.

The Theory of Second Best tells us that if you do not have that “ideal state” and you dismantle one so-called “rigidity” but leave others, then you can make things worse.

The other point is that often it is “better” for governments to introduce new “rigidities” to confront existing departures from perfect competition.
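
Here is a small numerical sketch of that last point (my own toy construction in Python, not the original Lipsey-Lancaster algebra): two substitute goods share a constant marginal cost, but good 1 is stuck at a monopoly price above that cost. The welfare-maximising price for good 2 then turns out to be above its marginal cost as well – an extra “distortion” does better than the textbook prescription of marginal-cost pricing:

```python
# A toy second-best illustration (my construction, not Lipsey-Lancaster's algebra).
# Two substitute goods with linear inverse demands p_i = a - b*q_i - d*q_j and
# constant marginal cost c. Good 1 is monopolised, so its price is stuck above c.
import numpy as np

a, b, d, c = 10.0, 1.0, 0.5, 2.0   # d > 0 makes the goods substitutes
p1 = 4.0                            # the irremovable distortion: price of good 1 held above c

def quantities(p1, p2):
    """Quantities demanded at the given prices (solve the linear demand system)."""
    A = np.array([[b, d], [d, b]])
    rhs = np.array([a - p1, a - p2])
    return np.linalg.solve(A, rhs)

def welfare(p2):
    """Total surplus: quasilinear consumer utility minus production cost."""
    q1, q2 = quantities(p1, p2)
    utility = a * (q1 + q2) - 0.5 * b * (q1**2 + q2**2) - d * q1 * q2
    return utility - c * (q1 + q2)

# The "textbook" prescription prices good 2 at marginal cost; instead, search for
# the price of good 2 that actually maximises welfare, given good 1 cannot be fixed.
grid = np.linspace(c, c + 3.0, 3001)
best_p2 = grid[np.argmax([welfare(p) for p in grid])]

print(f"welfare with good 2 priced at marginal cost: {welfare(c):.2f}")
print(f"second-best price for good 2: {best_p2:.2f} (marginal cost is {c:.2f})")
print(f"welfare at the second-best price:            {welfare(best_p2):.2f}")
```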

The point is that the theory of second-best destroys the capacity of the mainstream economists to use “perfect competition” models as an authority in the policy debate. The text book models have no legitimacy in the policy domain.

That is why you won’t read much about it in the newspapers or other media where economic policy is discussed.

One of my influences is, as I have noted in the past, the great Polish economist Michal Kalecki.

In the 2010 book – Great Thinkers in Economics: Michal Kalecki – (By Julio López and Michaël Assous) published by Palgrave Macmillan, we read that as a highly skilled mathematician, who turned his interest to economics, Kalecki warned of the misuse of mathematics in economic analysis.

He is quoted as saying (Page 2):

… you should know that you must never use mathematics when you can say the same thing in a simpler way, in common language …

He also was well aware (in the 1930s) of the failures of the textbook models that still dominate today.

In Joan Robinson’s Collected Economic Papers II (page 241) we read:

The general discontent with the complacency of text-book economics found its main expression in Keynes’ General Theory, and the theory of employment was, of course, far more important, both for analysis and for policy, than anything concerned with the theory of individual prices. Keynes himself was not much interested in price theory, but the two streams of thought were combined by Michal Kalecki.

[Reference: Robinson, J. (1960) Collected Economic Papers II, Oxford University Press, Oxford]

The discontent with the standard text-book models is thus not new. We are talking here about a literature from the 1930s. However, my profession was dogged, if nothing else. The fact is that the theories presented in mainstream textbooks, particularly in relation to microeconomics, had little merit then and have virtually no merit now.

Not a lot has changed.

In a paper by Michal Kalecki’s biographer, George R. Feiwel – On Kalecki’s Theory of Income Distribution – we read (Page 310):

His theory is not merely a deviation or departure from the neo-classical marginal productivity theory … He simply never started from it, but proceeded from a different approach in building his analytical construct and marginal productivity did not enter into his argument … Kalecki did not simply relax the restrictive assumption of universal rule of perfect competition. The model of perfect competition is foreign to his method of attacking economic problems. He argued that only by dropping the untenable assumption of perfect competition and penetrating the real world of industrial and market structures (imperfect competition and oligopoly) can any plausible propositions about determinants of macrodistribution be advanced … Kalecki’s theory of distribution ‘is important both because his theory is important in its own right and because it focuses attention on an aspect of distribution theory which had hitherto been neglected because of the preoccupation of earlier writers with the production function and perfect competition.’

The passage quoted within the Feiwel extract above is from Frank Hahn’s 1972 book – The Share of Wages in the National Income, London, 1972, pp. 2, 35, passim.

[References: Feiwel, G.R. (1974) ‘On Kalecki’s Theory of Income Distribution’, De Economist, 122(4), 309-325; Hahn, F.H. (1972) The Share of Wages in the National Income, London]

The point of that stroll down Kalecki-memory lane is to reinforce the notion that economics should jettison these flawed starting points. Students should only encounter the model of perfect competition in a course on the history of economic thought.

Holding it out as a benchmark for optimal outcomes, against which all policy has to be judged, just sets up the discussion to reject almost any policy intervention as being bad.

The language used is prejudicial (re Charles Manski’s article) and the conclusions drawn plain wrong.

Time to move on.

Conclusion

There is a coherent macroeconomics available – it is called Modern Monetary Theory (MMT).

There is also a good body of microeconomic literature around – that is marginalised – which builds on the sort of insights that Kalecki (and Marx) and others provided to us many, many years ago.

Why do mainstream economists hang on to such defunct approaches? First, they are so arrogant and thick-skinned they don’t feel the humiliation of being wrong all the time. Second, they hate (as in religiously) the implications that arise when a more coherent economic framework, which is grounded in real world realities, is used.

That is enough for today!

(c) Copyright 2013 Bill Mitchell. All Rights Reserved.

Comments

  1. Seems Tyler Cowen was not so impressed with the Manski article as you were, Bill, as he worked into the wee hours of the following morning to hurry out “I am sorry, but this is absurd“, which might also be a good name for any response to his article.

  2. “mainstream economics is defunct and we should decommission teaching programs throughout the world and introduce new progressive approaches” ?

    Exactly, Bill. A similar concept is being heard from multiple directions. And not just in economics.

    The whole world seems to be seeing what Max Planck noted back circa 1905. Much of the difficulty in reconciling scale-related axioms arises from CONFUSING MICRO-SCALE AND MACRO-SCALE INTER-RELATIONSHIPS.
    http://www.academia.edu/1039620/The_odd_couple_Boltzmann_Planck_and_the_application_of_statistics_to_physics_1900-1913_

    That’s a topic we would do well to better DEMONSTRATE to all kids, by age 10. That simple step would better prepare citizens entering all emerging disciplines to be comfortable facing a lifetime of scaling organizational tasks.

  3. i.e., Uniquely different properties are expressed at every scale of every type of organization.

    We have zero predictive power! But seemingly unlimited adaptive power.

    Those little subtleties go a long way toward preparing citizens to explore any and all options with more personal and group agility …. faster/better/cheaper.

  4. “Loosening the original priors” of a utility-maximizing-only theory of the macroeconomy is like setting your dog to build his doghouse in the corner of the yard, but then finding that his leash was too short to reach the corner of the yard, and fixing the problem by getting a longer leash.


    I predict that the result will not be a doghouse. You may claim to a gullible visitor that the digging in the backyard is the dog preparing the foundation, but whatever the dog is doing, making progress on building the doghouse is not part of it.


    Utility maximizing theory has the advantage that it allows the description of microeconomic activities using obsolescent models from physics, but it has the disadvantage of being long since falsified as a comprehensive theory of human decision making. And for any macroeconomic models that are supposed to be rigorously based on microfoundations, the fact that the microfoundations chosen are incapable of giving a comprehensive picture of the decision making of individuals and groups in society implies that whatever you are doing, it is not giving a picture of our macroeconomy … unless it is a picture with either gaping holes or fictions to cover for the gaps.


    The excuse for using a known-false model is the “as if” excuse ~ if the model that is estimated performs well enough, then it’s “as if” the model was true. However, the welfare results depend on the underlying preferences being the actual preferences of people ~ there is no valid welfare result from a mechanism that merely drives an effective imitation of the choices that people make.


    And of course General Equilibrium Theory is still dead (Ackerman 2002 [pdf]). The rigorous results from the 1970s showed that, while a general equilibrium position may exist under a variety of impossible conditions regarding perfect information about future events, even under those impossible conditions additional provably false assumptions are required to prevent there being an arbitrarily large number of such positions with arbitrarily unstable dynamics. That renders the general equilibrium model about as useful for actually modelling a modern industrial economy as a field of fireflies is for reading a book at night. And so General Equilibrium modeling stopped being a hot topic for a while, like a politician that has suffered a scandal, only to come out of temporary semi-retirement once the shock of finding out that the approach is entirely untenable gave way to a set of pat talking points that economists give regarding these “technical issues” – and only if pressed by sniping from some referee who is not in with the program – in a small part of the literature review for their “Computable General Equilibrium Models”.


    And what is particularly depressing is how much effort is devoted to avoiding remembering things that economists had previously learned. Between the “Salt Water” New Keynesian Economics and the more “Fresh Water” approaches such as New Classical Economics, it seems quite fair to treat “New” in economics as an acronym for “No Effing Way”, since New Keynesian Economics is No Effing Way Keynesian Economics, and New Classical Economics is No Effing Way Classical Economics.

  5. Since Bill has indicated that readers should be alerted to the level of math involved, I think it not inapposite to point out that the theory of second best involves partial differential equations – rather a lot of them. Those unfamiliar with these will find the paper difficult to understand in parts. That said, these equations do occur later in the paper and not near the beginning.

    The math in Second Best (published in the Review of Economic Studies in 1956-57) is more difficult than that in the Boltzmann-Planck article by Badino cited by Roger Erikson (originally published in Annalen der Physik on 1 January 2009, something not immediately obvious on the linked site), though it may well be unfamiliar.

    Mention of this potential difficulty should not be construed as a criticism of either Roger or Bill.

  6. Since we are dealing with historical issues, I would like to recommend Antoin Murphy’s The Genesis of Macroeconomics: New Ideas From Sir William Petty to Henry Thornton (2009) and George Feiwel’s intellectual biography of Kalecki, The Intellectual Capital of Michal Kalecki: A Study in Economic Theory and Policy (1975).

    Murphy previously published a detailed account of the theory and practices of John Law (1997), which is covered only briefly in the above book. The Genesis of Macroeconomics covers the work of Petty, Law, Cantillon, Hume, Quesnay, Turgot, Smith, and Thornton and their attempts to come to an understanding, with mixed results, of the large-scale economic system – what we refer to as the macroeconomy (a term John K Galbraith thought incredibly ugly).

    Both books are well written, but Murphy’s is exceptional in this regard.

  7. While Bill is quite right to point out the necessity of being able to falsify scientific hypotheses, always waiting in the wings is a kind of get-out-of-jail-free card known as the Duhem-Quine gambit. I briefly describe this in a comment on a book review, published in a recent journal issue of Radical Statistics. As this is not yet available on the RadStats site, I must cite it here: http://rescipe.wordpress.com/2013/08/22/saving-the-hypothesis/.

  8. Roger: Max Planck said something else, which is very well known and may be more to the point,

    A new scientific truth does not triumph by convincing its opponents and making them see light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
    Max Planck

  9. Reminds me of the (ongoing I think) conflict between Classical (Frequentist) Statistics and Bayesian Probability Theory. Bayesian methods, actually invented and developed by Pierre Simon de Laplace (one of history’s great theoretical physicists), were supplanted by the Frequentists, who had a basic misunderstanding of the meaning of probability. They had a stranglehold on the field for a century, and viciously attacked any heretics.

  10. @SteveK9

    I’m not certain what you mean by saying that the frequentists had a “basic misunderstanding” of the meaning of probability. They certainly had a different understanding than most Bayesians, but this does not seem to me to constitute a misunderstanding. The frequency interpretation of probability works rather well for certain games, like craps and blackjack, on which it was based. That they had a stranglehold is certainly true, but this was assisted by the impractical computations required by Bayesian methods until the advent of the contemporary computer. Bayesian statistics is now coming into its own and being adopted as the standard for statistical analysis in a number of fields, like ecology, where before it would have been ignored. I would be wary at this stage of contending that Bayesian methods are universally applicable.

  11. Addendum:
    Sometimes, the frequentist interpretation of probability is blamed for a particular use of it in textbook versions of statistical hypothesis testing. What is taught in introductory statistics textbooks is usually a sort of cookbook version of Neyman-Pearson hypothesis testing. However, Neyman himself scathingly criticized this textbook approach, contending that it was oversimplified to the point of being simplistic and thereby misleading. His own approach was much more sophisticated and hence more complicated. Neyman-Pearson hypothesis testing methods differ to a significant extent from Bayesian methods, although there are overlaps. R A Fisher’s approach is different still. One could argue that all of these approaches have their proper applications yet differ about how extensive the areas of such applications might be. Bayesian methods can now be seen to be more broadly applicable than was previously thought and, where appropriate, are replacing the Neyman-Pearson hypothesis testing approach.

  12. Yes, SteveK9, but regarding Planck’s observation, as the field of economics has shown, it’s also possible for a refusal to engage in the scientific approach to triumph by progressively taking over the bulk of the graduate-degree-granting departments in a country so that supporters of scientific approaches are the ones who appear to die out.

    Now of course on the evidence they don’t actually die out as such, but rather get pushed aside into the relative shadows, with a common understanding among the highest status departments that a broadly defined non-scientific approach is the definition of the field, and any other approach just isn’t really economics. Being academia, there will still be argument over finer distinctions within the non-scientific approach, which will give the illusion of scientific enquiry.

  13. Martin Heidegger may have got many things wrong – his naive membership of the Nazi party comes to mind – but in one observation I think he was spot on. He held that since Plato, both philosophers and scientists have overlooked the fact that all theories are derivative of a prior, more basic, pre-theoretical way of being. I think this ties in well with the observation that ordinary statistical techniques work pretty well for simple distributions of data that are effectively independent of each other and randomly distributed – the distribution of bolts exiting a factory, for example. However, whenever we get into the social sciences, where the bolts are human beings, or as Heidegger calls them Dasein, these human beings are, unlike the bolts, neither independent nor random. The statistics used so confidently in the hard sciences are often not really applicable to the world of the social sciences and, as a consequence, increasingly desperate mathematical economists are forced to make “wildly implausible assumptions” to try to hide the fact that the emperor has no clothes. The early researchers on chaos theory were, I think, right in their motto: “local randomness and global determinism”. That is, in the social sciences where humans interact with each other, where they are neither independent entities nor randomly distributed in their market interactions, you just can’t assume you can always scale up from the micro to the macro; if you do, you may be guilty of a “Fallacy of Composition”.

  14. Hi Bill, not sure if this is the right place to post this but I wanted to make a few probably naive comments/questions regarding a government applying MMT.

    It seems to me that the real battle (assuming, erroneously, that economic policy is at least somewhat decided by reasoned argument!!) with regard to implementing MMT wisdom is over interest rates, rather than the solvency issue. As I understand it, one of the main tasks for central bankers around the world is maintaining control of the overnight interest rate. Given that, if a government liberated itself from the requirement to issue bonds to cover deficits, then bank reserves would swell (making it hard for the central bank to keep control), the mainstream would reject MMT on these grounds. Do you agree? (BTW I’m aware that MMT has a compelling critique of rate targeting)

    Also, it seems to me that when MMT says “paying back debt isn’t a problem for a government that issues its own floating fiat currency” you are really saying “it isn’t a problem IF that government stops pretending it is tax/bond and target interest rate constrained”. Until that time it seems to me that interest on government debt is an issue (though perhaps minor for ours and many other countries) and the fear that taxes will be raised and services dismantled as a result is not entirely misplaced, again assuming the government in question keeps pretending it’s a quasi-household.

    One further point. Do you think that if an MMT party were to win the coming Australian election and abandon neoliberal fiscal policy they would suffer any ramifications on an international level? I realise that many MMT theorists are American and so don’t really have this problem, but would the powers that be let Australia adopt MMT policies? I remember Michael Hudson pointed out that trading oil in euros rather than dollars is considered to be an “unfriendly act” by the US – would abandoning the voluntary constraints similarly harm US interests in any way?

    Sorry if that was a bit long, hope it’s coherent

    Enthusiast of the blog

  15. “I wanted to make a few probably naive comments/questions regarding a government applying MMT”

    After that it was all downhill unfortunately.
