By Harry Saunders
Allow yourself for a moment to picture a time, say 5, 10, or 50 generations in the future. Imagine yourself in a world where everyone is consuming at a level that satisfies them. People need to work only a small fraction of their time, and the need for work diminishes persistently, so leisure time is ever-rising. And the economic machine you see working there is automatically and continuously reducing its assaults on natural ecosystems globally.
Impossible, right? As it happens, neoclassical economics says not.
Surprising as this might seem, would it further surprise you to learn this economic machine has all the trappings of a free market, competitive, private-ownership economy, including producers who maximize their profits? And that this very dynamic is what propels and enables this “golden age” for households – and for the environment?
More Good Surprises
Before you succumb to (what must be) a deep suspicion that this is all dreadfully theoretical and so cannot be trusted to depict reality in any meaningful way, there are other good things to report about this economy.
Forces at work in the economy act to limit or erase income inequality. Persistent poverty is disallowed. The economy stabilizes on what is called a “golden age path” where intergenerational equity is automatically assured.
And despite what you may have read, such an economy does not require growth to sustain itself.
Intriguingly, this state of affairs depends not on altruistic behavior but on natural dynamics arising when agents act in their narrow self-interest – it does not presume any rallying cry for people to act with the larger good in mind. It happens by itself, with or without altruism. It is a natural ecosystem, in other words.
Neoclassical Theory and Reality
That’s a lot to swallow, you say.
Once you can envision the commonsense mechanics at work, swallowing will be easier. But to help this along, let’s first attack a root of your concern: theory vs reality.
Because of the care with which generations of brilliant economic minds have assembled it, neoclassical economics points a very reliable finger at powerful economic forces at work in the real world. And while it might not seem so from the abstract-looking mathematics and seemingly arbitrary “assumptions” that infuse it, the neoclassical theory that survives today came down to us from economists possessed of a consuming passion for explaining real things about the real world.
All the above conclusions follow from a standard framework stripped to its bare essentials. Its theoretical foundation rests on three massive pillars, each festooned with laurels of the Nobel variety – plus a fourth pillar that should have been likewise festooned. That is, five of the great economists of the past few decades – Kenneth Arrow, Gerard Debreu, Robert Solow, Edmund Phelps, and Franco Modigliani (Laureates all) – built the three pillars economists know as general equilibrium theory, neoclassical growth theory, and neoclassical consumption theory. Ronald Shephard created the fourth pillar, called (somewhat obscurely) duality theory. Each of these accomplishments was hard won, and each represents a truly fundamental advance in our economic understanding.
But pedigree aside, what you really want to know is how cautiously you need to treat the disturbing fact that this theoretical foundation is, well, theoretical, and rests on challengeable assumptions. And, what may be worse, it embodies those assumptions in mathematical equations that seem off-puttingly arbitrary.
The equations are neither arbitrary nor arbitrarily constructed. As for the assumptions, these are deeply commonsensical. Were you to read a clear, reader-friendly verbal description of each assumption, without looking at the mathematics, each would make sound sense to you. The mathematics is simply the economist’s way of making these statements perfectly precise. It allows economists to converse among themselves in shorthand, bless their nerdish souls. These conversants require that the assumptions and relationships be stated in a form that brooks no challenge as to their exact meaning.
Schumpeter’s Challenge
Numerous new economic approaches have arisen in recent years, including social network theory, agent-based modeling, etc. Without here explaining why (subject of a future post), none of these is inconsistent with what is pictured here. But Joseph Schumpeter deserves special attention.
The author of “creative destruction,” Schumpeter saw economic progress as something that moves in fits and starts, where new developments arise not because returns to capital happen to sit at some particular level, as traditional neoclassicists might have it, but directly out of the innovation of entrepreneurs. New production capacity arises, unpredictably producing new things and destroying old ones, because of Schumpeterian dynamics, not out of neoclassical necessity.
Nonetheless, the larger neoclassical framework encompasses Schumpeter’s idea in the aggregate. Neoclassical technology trends – having effects identical to those in the Schumpeterian picture – allow an accurate portrayal of the economy’s aggregate functioning as it responds to innovation’s engine.
And how am I to believe this aggregate picture is the correct picture, you ask? An analogy will make this easier. If you are familiar with the discipline of thermodynamics, you’ll know that the behavior of a gas is highly predictable, even though that behavior arises from gazillions of tiny molecules moving around randomly and chaotically. Boyle’s law and the “ideal gas law,” for instance, are extremely simple equations delivering simple relations among pressure, volume and temperature that are (for all intents and purposes) perfectly predictive. The discipline of statistical mechanics later showed that these laws’ validity arises directly from the random behavior of individual molecules. Things in aggregate somehow turn out to be smoother and more straightforwardly predictable.
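To make the averaging intuition concrete, here is a toy sketch (not part of the article’s argument; the “speeds” are arbitrary made-up numbers) showing how chaotic individual behavior washes out into a stable aggregate:

```python
import random

def mean_speed(n_molecules, seed):
    """Average 'speed' of n randomly-moving toy molecules."""
    rng = random.Random(seed)
    # Each molecule gets an arbitrary random speed between 0 and 1.
    speeds = [rng.random() for _ in range(n_molecules)]
    return sum(speeds) / n_molecules

# Individual small samples are erratic, but large aggregates are stable:
small = [mean_speed(10, s) for s in range(5)]       # varies noticeably
large = [mean_speed(100_000, s) for s in range(5)]  # clusters tightly near 0.5
```

The point is only the qualitative one the analogy makes: the bigger the crowd of random actors, the more predictable the aggregate.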
In like fashion, neoclassical laws govern molecular Schumpeterian chaos in the large, extracting visible simplicity out of underlying complexity. New goods, services, and industries – though we cannot know what these will be – will always require households to supply the needed capital and labor, and to consume the products offered.
The Mechanics of it All
Credibility is best built by looking at what the theoretical construct says in day-to-day language. While brevity prevents it here, a full accounting would describe the natural mechanics of how households drive the entire economic machine via their contributions of labor and capital (savings) to producers; how producers in turn deliver households consumption goods and services; how households own the means of production; how the economy values uncompensated labor such as household operations and child care as equivalent to paid labor; how prices lock to quantities; and how the economy autonomously evolves toward maximizing “economic welfare,” something akin to maximizing the “size of the pie” left over as surplus from this economic machine after it’s fed.
And how all of this results in an economy that does not require growth to sustain itself but where producers get exactly what they need from households to run the machine in zero-growth mode indefinitely. All the subject of a future post, perhaps.
Where the Good Things Come From
But more exciting than how this economy functions is the way it behaves when allowed to unfold in its own way. Here we return to the aforementioned surprises. For instance, in this economy capital owners cannot claim a larger share of the pie than is due them. Natural forces engage to drive the economy always back to a balance wherein wealth derived from capital investments and from wages paid to labor both lock on a level that optimally serves household preferences. Capital and labor claims on wealth resist getting out of balance.
As if this weren’t enough, there is another startling thing about this future economy: persistent poverty is disallowed. Any economy suffering from under-capitalization and/or over-supply of labor (i.e., high unemployment) will automatically move to a condition of full employment and will generate sufficient productive capital stock for itself to match the satisfaction-maximizing consumption of households seen in any other economy. Natural economic forces act to drive toward this state of affairs.
One other, intellectually arresting thing (last one, I promise): This economy will be on what is called a “golden age path,” so named by Nobel Laureate economist Edmund Phelps, who first discovered such behavior. Phelps’ “Golden Rule of Accumulation” – christened thus because it follows the golden rule principle of “do unto others [i.e., future generations] as you would want them to do unto you” – is automatically honored. That is, at each point in time, the choices made by households occur in such a way that the welfare delivered to each succeeding generation will be no less than the welfare of the current one (and, with ongoing technology improvements, will be greater).
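For readers who want to see the condition itself: in the standard textbook growth framework (a sketch of the usual derivation, not the article’s own), Phelps’ Golden Rule picks the steady-state capital stock per worker that maximizes consumption per worker:

```latex
% Steady-state consumption per worker, with production f(k),
% population growth rate n, and depreciation rate \delta:
c = f(k) - (n + \delta)\,k
% Maximizing c over k yields the Golden Rule condition:
f'(k^{*}) = n + \delta
```

With a stabilized population ($n = 0$), as assumed later in the piece, the condition reduces to setting the marginal product of capital equal to the depreciation rate.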
The Caveats
For this world to actually exist, the declining assault on natural capital would have to be aggressive enough to preserve its health and ongoing viability. And in any case, the economy would have to operate at or below natural capital’s carrying capacity.
In this regard, the above portrayal of this economy makes one significant assumption: that the global human population has stabilized at some fixed level (or is rising ever so slightly). Current trends seem to point in this direction, but it is still an assumption. The framework furthermore makes no prediction as to whether any particular population size will be sustainable by way of honoring natural capital’s carrying capacity. It predicts only that, under a fixed population, consumption can remain steady while calls on natural resources decline. No doubt any particular consumption level will have a powerful bearing on the “Gaia-balancing” equation. The ongoing hard work of ecologists and a throng of others will be needed to determine the hard ecological limits of natural capital and so better enable us to solve this crucial-to-humanity equation.
One other thing: even though calls on raw resources are declining, their use can still assault natural capital in ways beyond the potential to exhaust some of them. If those resources generate emissions or other pollutants, they threaten sustainability in yet another way.
But be reminded that we are looking generations hence. Is it really that difficult to envision a world generations down the road where means have been found to supply clean, cheap, and abundant energy? And to handle other raw materials in a way that makes them continually recyclable so as not to impinge on natural capital in any way that nature cannot replenish?
That is the story. If your skepticism remains unquenched, you are invited to test your misgivings against a recently released article in Ecological Economics.
Final Word
The economic future of human civilization may not be quite so miserable as you or others around you might worry. Despite the formidable challenges directly facing humanity today, there is a highly alluring future painted by good old neoclassical economics that may not in fact be that far beyond our reach; or at least not far beyond the reach of your progeny x generations hence. Pick the value of x for yourself. Then decide the direction you want to travel to help them arrive there.
Harry Saunders is the Managing Director of Decision Processes Incorporated. He has consulted at numerous Fortune 100 companies including Chevron, General Motors and Hewlett Packard. He is also a Senior Fellow at The Breakthrough Institute.
Didcot is yet another power station out of action, but what does this mean for UK security of supply this winter?
The fire last week at Didcot power station has led once again to cries of “the lights are going to go out this winter”. But people who ask whether or not the lights will go out are asking the wrong question. It is politically inconceivable to allow non-consensual power cuts to happen in the UK this winter; therefore the question we should be asking is, “how much is it going to cost us to keep the lights on, and are there ways of reducing the cost?”
When a unit at the Didcot B gas-fired plant caught fire, the UK electricity system lost around 680 megawatts (MW) of power generation. There is as yet no indication of how long it will take to get the unit up and running again, but it could be out of action for the rest of the winter.[i] To put this in context, UK peak demand for electricity is usually just under 60 gigawatts (GW), meaning that the fire cost the UK around 1% of peak demand.
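The percentage above is easy to check, using the figures as quoted in the text:

```python
# Rough share of UK peak demand lost when the Didcot B unit caught fire,
# using the figures quoted in the text.
lost_mw = 680            # generation lost at Didcot B
peak_demand_gw = 60      # typical UK peak electricity demand
share = lost_mw / (peak_demand_gw * 1000)
print(f"{share:.1%}")    # prints "1.1%"
```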
The system is designed to deal with problems such as this; fires and faults at power stations are not so rare, and the UK has a spare capacity margin – a capacity cushion designed to deal with unexpected incidents such as fires – which ensures that the lights stay on even when power stations break.
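The capacity margin mentioned above is simply the headroom between available generating capacity and peak demand. A minimal sketch with hypothetical numbers (the installed-capacity figure here is purely illustrative, not an Ofgem statistic):

```python
# Illustrative capacity margin calculation (numbers are hypothetical).
installed_gw = 65      # assumed total available generating capacity
peak_demand_gw = 60    # typical UK peak demand, as cited in the text
margin = (installed_gw - peak_demand_gw) / peak_demand_gw
print(f"{margin:.1%}")  # prints "8.3%"
```

The smaller this headroom, the less slack there is to absorb unexpected outages like the Didcot fire.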
However, what makes this event unusual is that it is the most recent in a chain of problems with big power stations. In February, a unit at Ironbridge was closed after a fire, and in July two units at Ferrybridge were also closed. Moreover, two nuclear reactors have been temporarily closed due to safety problems. So does this mean the lights are going to go out?
There is little doubt that the UK capacity margin is declining, and will continue to do so over the next few years.[ii] Yet a recent report showed that there is actually rather little agreement over what impact this will have on the security of UK electricity supply.[iii]
However, the debate around capacity margins tends to skate over the crucial issue of politics. Power cuts would be utterly unacceptable to the British public. In the 1970s, the ongoing capacity crisis caused by the miners’ strike was one of the key factors which eventually brought about a change of government, and no current government is going to risk their political legitimacy on this issue, especially not shortly before a general election.
Moreover, power cuts send a signal to businesses that a government is struggling to manage its infrastructure, which could deter investment. This means that debating the ‘chances of the lights going out’ is misguided; instead, the question we should be asking is: how much is it going to cost to keep the lights on?
The new capacity market[iv] is currently getting a lot of attention in policy and academic circles. No-one is suggesting that procuring new capacity is going to be cheap; in fact, there are arguments to suggest that it may prove far more expensive than necessary.[v]
Meanwhile, Ofgem has revealed plans for a new package of contingency measures, including National Grid’s new Supplemental Balancing Reserve, which for the first time includes plans for demand-side management to reduce peaks in energy demand and allow our electricity system to work more efficiently.[vi] The media has framed these supplemental balancing mechanisms in highly negative terms, calling them ‘emergency measures’. But this is misleading; in fact, if we get it right, these demand-side measures could provide a much cheaper means of meeting the peaks in electricity demand.
Therefore, the fire at Didcot B power station provides an opportunity as well as a challenge. For the first time, the UK will get to find out how well its electricity system performs over winter with a slightly lower spare capacity margin. We can see how resilient our electricity system is to unexpected events such as fires. And more importantly, we get to find this out before we truly get into a problematic situation; after all, we still have most of our old coal and nuclear capacity up and running. For a long time, people have talked about focusing on the demand-side as well as the supply-side; well, this winter is our chance to do exactly that.
Emily Cox, Sussex Energy Group
Emily Cox is a PhD researcher with the Sussex Energy Group, focusing on electricity security in the context of a low-carbon transition. Her main research interests are energy security, UK electricity markets, stakeholder engagement and energy behaviour. She has recently worked as a researcher for the Royal Academy of Engineering, E.ON Technologies at the Ratcliffe-on-Soar power station, and the Oxford University Centre for the Environment. She has also worked for a variety of NGOs, including as a regional network coordinator for Greenpeace. Emily currently tutors an MSc course in Energy Policy for Sustainability at the University of Sussex.
Works Cited:
[i] McKenna, J. (2014) “Fire halves Didcot capacity”. Process Engineering, 20 October 2014
[ii] Ofgem (2014) Electricity capacity assessment report 2014. Ofgem, London
[iii] Royal Academy of Engineering (2013) GB electricity capacity margin: a report by the Royal Academy of Engineering for the Council for Science and Technology. Royal Academy of Engineering, London
[iv] DECC (2014) Electricity Market Reform – Capacity market: detailed design proposals. Department of Energy and Climate Change, London
[v] Newbery, D. and Grubb, M. (2014) The Final Hurdle?: Security of supply, the Capacity Mechanism and the role of interconnectors. EPRG Working Paper 1412 / Cambridge Working Paper in Economics 1433
[vi] Ofgem (2013) National Grid’s proposed new balancing services: draft impact assessment. Ofgem, London