This is Part 4 in the mini-series discussing the relative merits of the basic income guarantee proposal and the Job Guarantee proposal. It is the ‘robot edition’. The march of the robots is the latest pretext that basic income proponents (including the IMF now) use to justify their policy advocacy. There is some truth in the claims that the so-called ‘second machine age’, marked by the arrival of robots, is not only gathering speed, but is different from the first period of machine development with respect to its capacity to wipe out human involvement in production. But the claims are somewhat over the top. Further, the claims that these trends are inevitable are in denial of the basic capacities of the state to legislate in the common interest. While the innovations in technology will free labour from repetitive and boring work and improve productivity in those tasks, there is no inevitability that robots will develop outside the legislative framework administered by the state and overrun humanity (even if the predictions of robot autonomy are at all realistic). We will surely need to develop a coherent adjustment framework to allow these transitions to occur equitably and, where they are not possible (due to limits on worker capacity), to ensure alternative visions of productive work are developed.
Further, the Job Guarantee is a better vehicle for handling these types of transitions and creating new forms of productive work. Adopting a basic income guarantee in this context just amounts to surrender.
What about robots?
As time has passed, basic income proponents have moved from a concern about unemployment driven by the fluctuations in the economic cycle to trend issues relating to the collapse of employment in general.
They use the increase of robots in production processes as another pretext for advocating the introduction of a basic income guarantee. Even the IMF is now buying into this line of argument.
The robot revolution could have profound negative implications for equality (Berg et al., 2016: 10).
[Reference: Berg, A., Buffie, E.F. and Zanna, L-F. (2016) ‘Robots, Growth, and Inequality’, Finance and Development, 53(3), 10-13. LINK .]
Berg et al. (2016: 11) contrast the “technological optimists” who believe that “technological advances raise productivity and thus output per person … the overall effect is a higher standard of living”, with the “more pessimistic, narrative … [which] … pays more attention to the losers”.
The pessimists focus on the decline of low skill jobs as a result of the computer age and the increased income inequality that they believe has resulted from this impact.
The conventional economic argument that the IMF deploys is based on a competitive labour market framework where the introduction of robots, which are perfect or near-perfect substitutes for labour, effectively increases the supply of ‘labour’ and suppresses wages.
They recognise that they “assume away unemployment” (p.11) in their analysis, and so the labour market adjustment is in terms of price (wages) only.
But then who will buy the increased output as capital (robots and traditional capital) becomes more productive and “there is more and more output to be shared among actual people”? (Berg et al., 2016: 11).
The IMF’s answer is that with wages falling, “the owners of capital do” as human “labor becomes a smaller and smaller part of the economy” (p.11).
If robots do not become perfect substitutes then a role remains for “humans’ special talents” which “become increasingly valuable and productive as they combine with this gradually accumulating traditional and robot capital.”
The message for unskilled workers remains the same. Their skills – routinised and repetitive tasks – are the ones that are likely to be perfect or near enough substitutes for robots.
The loss of jobs for these workers drives their wages down relative to skilled workers. So not only is the overall wage share falling (because capital is more productive than overall labour) – that is, income inequality rises – but wage inequality also rises because of the increase in the wage differential between skilled and unskilled workers.
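The mechanism can be made concrete with a toy sketch. This is not the Berg et al. (2016) model itself, and all parameter values below are hypothetical: we simply assume a Cobb-Douglas production function in which robots are perfect substitutes for human labour. As the robot stock grows, output rises, the wage (the marginal product of effective labour) falls, and the human labour share of income shrinks – exactly the distributional story the IMF tells.

```python
# Toy illustration (not the Berg et al. model itself): robots as
# perfect substitutes for human labour in a Cobb-Douglas production
# function. All parameter values are hypothetical.

ALPHA = 0.6   # output elasticity of (human + robot) labour
K = 100.0     # traditional capital stock
L = 100.0     # human labour supply (held fixed)

def economy(R):
    """Return (output, wage, human labour share) for robot stock R."""
    effective_labour = L + R                                 # perfect substitutes
    Y = effective_labour ** ALPHA * K ** (1 - ALPHA)         # total output
    w = ALPHA * effective_labour ** (ALPHA - 1) * K ** (1 - ALPHA)  # marginal product wage
    labour_share = w * L / Y                                 # = ALPHA * L / (L + R)
    return Y, w, labour_share

for R in (0.0, 100.0, 400.0):
    Y, w, share = economy(R)
    print(f"R={R:5.0f}  output={Y:7.1f}  wage={w:.3f}  human labour share={share:.2f}")
```

Running the sketch shows output growing while the wage and the human labour share fall monotonically as R rises – “more and more output to be shared among actual people”, but a shrinking slice of it accruing to workers.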
So the question that is raised by this hypothetical dreaming is what is the best way to ensure there is “sufficient aggregate demand when buying power is increasingly concentrated”? (Berg et al., 2016: 13).
How can society ensure that in the face of lower wages workers will retain an “ability to pay for health care and education and invest in their children”? (Berg et al., 2016: 13).
The IMF solution:
1. Tax the rising income on capital – that is redistribute the “income from capital”.
2. The redistribution would come via the introduction of a basic income guarantee.
The IMF’s intervention into the debate, which no doubt will elevate the basic income proposal a few notches among mainstream economists, provides no new insights at all.
It is the same argument, used for centuries, predicting that technology will create an employment void for workers.
The robot argument goes, more or less, like this. Yes, there have been constant changes in technology, which have impacted on the types of employment on offer for hundreds of years, and the C19th Luddites were wrong.
Even though machines have substituted for labour, sometimes at a rapid pace over the last few centuries of capitalist production, history tells us that there is also strong “complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labour” (Autor, 2014: 5).
[Reference: Autor, D. (2015a) ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’, Journal of Economic Perspectives, 29(3), 3-30.]
As Howard Bowen (1966: 9) noted:
The basic fact is that technology eliminates jobs, not work.
[Reference: Bowen, H.R. (1966), Washington, US Government Printing Office.]
But the assertion of the modern-Luddites is that the so-called ‘second machine age’ marked by the arrival of robots, which is currently gathering speed, is different from the first period of machine development.
The claim is that in this ‘second machine age’, the capacity of robots to replace humans in work is unlike anything we have previously seen and exceeds the ability of humans to envisage and create new jobs that will sustain those displaced by robots.
We should be circumspect in using the term “This Time is Different” given it has marked the intervention by Reinhart and Rogoff into the GFC debate with their suspicious spreadsheet analysis, which has failed to predict anything of importance in the real world.
The “This Time is Different” syndrome is a convenient vehicle to distance one from previous predictive failures and to claim a renewed (and undeserved) credibility in the contemporary debate.
David Autor, Frank Levy and Richard Murnane argued in their 2003 study that the observed polarisation of high and low-paying jobs in the US labour market was the result of a decline in middle-level jobs as a result of routinisation.
[Reference: Autor, D.H., Levy, F. and Murnane, R.J. (2003) ‘The Skill Content of Recent Technological Change: An Empirical Exploration’, Quarterly Journal of Economics, CXVIII, 1279-1333.]
That insight motivated academic researchers Maarten Goos and Alan Manning to study the same phenomenon in the UK labour market. They argued in their paper – Lousy and Lovely Jobs: the Rising Polarization of Work in Britain
– that “workers in the middling jobs used to do routine tasks, while workers in lousy and lovely jobs do non-routine tasks” (Goos and Manning, 2007: 124).
Goos and Manning (2007: 131) show that:
… between the 1970s and late 1990s we have seen rapid employment growth in lovely jobs (mainly in professional and managerial occupations in finance and business services), some growth in lousy jobs (mainly in low-paying service occupations) and a decline in the numbers of middling jobs (mainly clerical jobs and skilled manual jobs in manufacturing).
[Reference: Goos, M. and Manning, A. (2007) ‘Lousy and Lovely Jobs: The Rising Polarization of Work in Britain’, The Review of Economics and Statistics, 89(1), 118–133. LINK to longer 2003 Working Paper.]
A 2013 study by Oxford University researchers – The Future of Employment – focused on estimating “the probability of computerisation for 702 detailed occupations” in the US labour market (Frey and Osborne, 2013: 3).
[Reference: Frey, C.B. and Osborne, M.A. (2013) ‘The Future of Employment: How Susceptible are Jobs to Computerisation?’, Working Paper, Oxford Martin Programme on Technology and Employment, September. LINK.]
The term “computerisation” refers to “job automation by means of computer-controlled equipment” (p.3). They argued that “with more sophisticated technologies” (p.4), “computerisation is no longer confined to routine manufacturing tasks” (p.4) and that (p.4):
… manual tasks in transport and logistics may soon be automated.
This would spell trouble for many workers in the low-paid (“lousy”) jobs.
Further across the 2×2 matrix of job characteristics (rows: routine versus non-routine; columns: manual versus cognitive), Frey and Osborne (2013: 17) find that “computerisation is now spreading to domains commonly defined as non-routine”. The non-routine tasks were previously considered to be “not sufficiently well understood to be specified in computer code” (p.17).
These developments are impacting on both the manual and the cognitive non-routine tasks.
Their study finds that (p.41):
47 percent of total US employment is in the high risk category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two.
The authors note that the possibility that some jobs can be “substituted by computer capital” (p.45) doesn’t mean they will be.
In particular, they note that “regulatory concerns and political activism may slow down the process of computerisation” (p.46). In other words, the state, responding to our concerns, has the capacity to prevent this process.
The example they use is ‘driverless cars’ and the government’s capacity to limit their access to roads. We will come back to that issue a bit later.
This theme was further elaborated on by MIT researchers Erik Brynjolfsson and Andrew McAfee in their New York Times Op-Ed (December 11, 2012) – Jobs, Productivity and the Great Decoupling.
They point to the divergence between labour productivity and employment growth in many advanced nations that began in the mid-1980s as evidence that robots are taking over.
Real wages growth has become flat and has also diverged from labour productivity growth. It used to be said that one of the stylised facts of capitalism was that real wages grew in proportion with productivity growth.
They term the divergence between productivity growth and employment and wages growth – the “Great Decoupling” – and argue that a major cause has been the “changing nature of technological progress”.
Their hypothesis is simple:
As digital devices like computers and robots get more capable … they can do more of the work that people used to do. Digital labor, in short, substitutes for human labor. This happens first with more routine tasks, which is a big part of the reason why less-educated workers have seen their wages fall the most as we moved deeper into the computer age.
The next stage of the decoupling will accelerate as computerisation becomes cheaper than even the cheapest human labour and “technologies … become more powerful”.
They concur with the ‘Lousy and Lovely Jobs’ scenario where future jobs will involve:
People who tell computers what to do, and people who are told by computers what to do.
The challenge is in their eyes to design “a healthy society” that can manage the transition of a “technology-fueled economy that’s ever-more productive, but that just might not need a great deal of human labor.”
Economists like Jeffrey Sachs, who seems to like ‘shock doctrines’, wonder whether the “Luddites are now getting it right” (Sachs and Kotlikoff, 2012: 2), especially for “unskilled labor whose wages are no longer keeping up with the average”.
Sachs predicts that “Brainier machines … pose a threat to tomorrow’s workers, whether skilled or unskilled” (p.4).
They suggest that (p.2):
What if machines are getting so smart, thanks to their microprocessor brains, that they no longer need unskilled labor to operate? Evidence of this is everywhere. Smart machines now collect our highway tolls, check us out at stores, take our blood pressure, massage our backs, give us directions, answer our phones, print our documents, transmit our messages, rock our babies, read our books, turn on our lights, shine our shoes, guard our homes, fly our planes, write our wills, teach our children, kill our enemies, and the list goes on.
[Reference: Sachs, J, and Kotlikoff, L. (2012) ‘Smart Machines and Long-Term Misery’, Working Paper 18629, National Bureau of Economic Research. LINK.]
Sachs’ last venture in ‘shock doctrines’ proved to be (predictably) a disastrous misadventure for Russia, although he claimed after he resigned from his advisory post in 1994 that he didn’t “even get a bit role” and that his “influence was essentially zero” (Sachs, 2000: 39). He laid blame at the US government and the IMF for rejecting his calls “to provide vital aid and debt relief” to Russia.
[Reference: Sachs, J. (2000) ‘Russia’s Tumultuous Decade: An insider remembers’, The Washington Monthly, March 2000, 37-39.]
We argue that it is highly likely that his societal shock predictions in the current setting will not prove to be anything much at all.
US academic David Autor, who is a leading authority on the issue of automation and employment recently concluded that while “(c)hanges in technology do alter the types of jobs available and what those jobs pay” (Autor, 2014: 5), the polarisation in wage movements where the “gains went disproportionately to those at the top and at the bottom of the income and skill distribution” (p.5) is “unlikely to continue very far into the foreseeable future” (p.5).
[Reference: Autor, D. (2015a) ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’, Journal of Economic Perspectives, 29(3), 3-30.]
Autor (2014: 11) noted that many jobs which are characterised by “precise, well-understood procedures” can be “codified in computer software and performed by machines”.
But he coined the term the ‘Polanyi Paradox’ to refer to what he thinks is the “bounded” capacity for “this kind of substitution” (p.11) (see also Autor, 2015).
He considers the substitution to be finite (p.11):
… because there are many tasks that people understand tacitly and accomplish effortlessly but for which neither computer programmers nor anyone else can enunciate the explicit ‘rules’ or procedures.
[Reference: Autor, D. (2015) ‘Polanyi’s Paradox and the Shape of Employment Growth’, Federal Reserve Bank of St. Louis: Economic Policy Proceedings: Re-Evaluating Labor Market Dynamics, 129-77.]
In his 1966 book – The Tacit Dimension – the Hungarian-British scientist turned philosopher Michael Polanyi wrote (Polanyi, 1966: 4) that:
I shall reconsider human knowledge by starting from the fact that we can know more than we can tell (emphasis in original)
[Reference: Polanyi, M. (1966) The Tacit Dimension, New York, Doubleday and Company, Inc.]
Accordingly, he conjectured that much of human knowledge is ‘tacit’ in nature and that the rules that allow us to ‘know’ things “cannot be put into words” (p.4). In a sense, we don’t know why we know things.
Much of our knowledge is the product of culture and tradition which is infused into our sub-conscious but acts to filter reality in particular ways that we are not immediately aware of. This is the tacit dimension.
Polanyi’s thesis, if correct, has significant ramifications for the claims that robots will take over the labour market.
Autor (2014: 11) says that “(f)ollowing Polanyi’s observation, the tasks that have proved most vexing to automate are those demanding flexibility, judgment, and common sense—skills that we understand only tacitly.”
Accordingly, he considers that the implications of the use of robots go beyond a discussion of the extent of substitution of machines for labour.
He argues (2014: 22) “that jobs are made up of many tasks and while automation and computerization can substitute for some of them, understanding the interaction between technology and employment requires thinking about more than just substitution”.
In other words, there are tasks that rely on our tacit knowledge which constrains the capacity of robots to replace humans in the workplace.
Those who argue that the ‘second machine age’ is somehow different must also demonstrate that the new wave of technologies has been able to overcome the ‘Polanyi constraint’.
Autor (2014) presents evidence that rejects this assertion although he also suggests that overcoming the ‘tacit dimension’ will represent the forefront of “engineering and computer science” endeavour (p.23).
The overall point is that we must assume that robots will replace humans in a number of areas, which then requires that the displaced individuals transition into alternative skills and jobs.
So a progressive state needs a framework to make those transitions – something like the Just Transition Framework, first developed by the Canadian Union Movement in the late 1990s to deal with climate change. We return to a discussion of that framework later.
But to think that all jobs will be replaced is unfounded.
Autor (2014: 27) says:
I expect that a significant stratum of middle-skill jobs combining specific vocational skills with foundational middle-skills levels of literacy, numeracy, adaptability, problem solving, and common sense will persist in coming decades.
Why? He believes that (p.27):
… many of the tasks currently bundled into these jobs cannot readily be unbundled — with machines performing the middle-skill tasks and workers performing only a low-skill residual — without a substantial drop in quality.
The state has a central responsibility in this regard.
Where possible, workers should be prepared through the education and training structures to work in these “middle-skill jobs”. If the state assumes that responsibility then transitions will be made and the employment crisis will be a figment.
Where it is not possible for workers to elevate their skills (for whatever reason) then alternative ‘low-skill’ jobs that are not as susceptible to substitution by robots need to be created.
It is here that the Job Guarantee becomes a central feature of the progressive transition.
The automation debate is typically constructed within a known vector of job types. The basic income proponents adopt this simplistic vision of the future.
Accordingly, the basic income proponents claim that as automation will wipe out a number of elements in this vector (the low-skill elements mostly), the only way to allow the affected individuals access to the distribution system (in a society becoming increasingly wealthy overall because of the productivity gains) is to introduce an income guarantee.
But suppose the vector of jobs is redefined by changing our concept of productive work. What we define as productive work is really only limited by our imagination, which relates back to our earlier discussion of the ‘gainful worker’ approach that has dominated the debate since the C19th.
In that sense, while the Job Guarantee plays an immediate and essential role in providing access to the distributional system without jettisoning the human need for work and accords with current societal goals which value contribution, it also provides a progressive (if not radical) framework for re-envisaging the concept of productive work.
We take up that discussion later.
Further, those who advocate an inevitable, technologically-driven decoupling between what is happening in the production side of the economy and what is happening in the labour market and the distribution system, overlook key points.
While there is a tendency in the literature to assume that this divergence was technologically driven and, in some sense, inevitable and unavoidable, an alternative narrative is more plausible.
One could argue that the introduction of mass assembly lines, which made workers much more productive compared to previous production methods, would have spawned the same income capture by the top-end-of-town as we now witness, had not society formed a strong sense of collective will and forced the state through the political process to ensure more equitable sharing of the income gains from the higher productivity.
Surely, had society simply acquiesced to the new technologies in the post-World War II period, the distributional consequences would not have been all that different to what we have observed since the mid-1980s (see Ormerod, 1994, who makes the same argument).
[Reference: Ormerod, P. (1994) The Death of Economics, Faber, London.]
The clue is that the divergence between employment and real wages growth, on the one hand, and labour productivity growth, on the other, has been a defining characteristic of the neo-liberal era. In that era, capital strategically invested resources to break down the sense of societal collective and co-opted the state to attack trade unions and maintain persistently high levels of labour underutilisation, which have made it difficult for workers to pursue and receive real wages growth in line with labour productivity growth.
It has less to do with the technological shifts and a lot to do with the sense of compliance and acquiescence that workers have assumed in this hostile neo-liberal era.
This leads to the next observation.
From one perspective, the robots-will-rule-the-world assertions avoid the question of the human agency involved. Of course, robots are becoming increasingly agile and will probably be able to replace a large number of jobs that rely on routinised and predictable action.
That trend has been on-going since the capitalists worked out better ways of securing the surplus production.
But just as children were banned from the workplace in advanced nations as an act of social policy, the state has the capacity to determine how the technology that is developed is deployed.
We produce highly technological motor vehicles that can be driven at exotic speeds but we force them to obey limits that are well within their overall capacity. Why? Because we empower the state to protect our common interests.
If robots and computers threaten our very survival then it is somewhat far-fetched to expect that we will allow the state to be totally compliant and allow robots to take over and drive out humans from the workplace.
There will always be options and alternatives and it is the role of the state to create a legal framework which advances the interests of the citizens in general.
While the innovations in technology will free labour from repetitive and boring work and improve productivity in those tasks, there is no inevitability that robots will develop outside the legislative framework administered by the state and overrun humanity (even if the predictions of robot autonomy are at all realistic).
The more apposite question is what will happen to unskilled workers who are unable, even with training assistance, to make the transition from machine operator to machine maintainer or designer?
The answer is to develop a coherent adjustment framework that allows these transitions to occur equitably and, where they are not possible (due to limits on worker capacity), develops alternative visions of productive work.
We will need to envisage new jobs which are currently outside of what we consider, as a society, to be productive activities.
We will introduce what we call a Just Transition Framework, which has to become a central aspect of a progressive future.
This framework allows the benefits of technology to be enjoyed by all even if specific cohorts of workers have to change careers mid-stream.
We will argue that the basic income response to the challenge of robots is a denial of the prospects that a ‘Just Transition’ can provide.
Conversely, we will argue that a Job Guarantee becomes an integral aspect of a ‘Just Transition’ and provides a variety of productive options for individuals confronted with the need to change jobs and retrain as robots and other technologies impact on their work prospects.
In Part 5 (the concluding part of this mini-series) I will consider the concept of Just Transition, coercion, and more.
The text will ultimately be edited down to form a chapter in Part 3 of our new book (due out later this year) on the way in which the progressive debate has fallen prey to neo-liberal constructs.
The series so far
This is a further part of a series I am writing as background to my next book on globalisation and the capacities of the nation-state. More instalments will come as the research process unfolds.
The blogs in these series should be considered working notes rather than self-contained topics. Ultimately, they will be edited into the final manuscript of my next book due later in 2016.
That is enough for today!
(c) Copyright 2016 William Mitchell. All Rights Reserved.