The social cost of carbon refers to the marginal cost of the impacts caused by each extra tonne of greenhouse gases released into the atmosphere. These costs are not strictly market-related and can include negative effects on environmental quality and public health. The measure is used to guide policy action and to understand where, and to what extent, emissions need to be reduced. Emission reduction targets are normally set to limit further GHG emissions, but even holding atmospheric carbon at current levels is proving unsustainable and economically unwise. The technology needed to actively remove carbon from the atmosphere and return us to pre-industrial levels already exists, but implementation is being bogged down by politics and persistently high costs. Governments need to invest in removing existing carbon from the atmosphere while also reducing current and future emissions, because the price of delayed action will be soberingly high.
Current atmospheric levels of carbon dioxide are estimated to be around 416 ppm (parts per million). The last time our planet’s atmosphere held this much CO2 was millions of years ago, during the Pliocene epoch, long before modern humans existed. Temperatures then were on average 3°C warmer, although at northern latitudes summer temperatures could exceed 14°C. During this epoch, the West Antarctic ice sheet was significantly smaller than it is now, and sea levels were considerably higher than today. Even a partial melting of the West Antarctic ice sheet, which is already underway, would cause sea levels to rise by at least one metre.
Of the several carbon measuring stations in the world, the Mauna Loa Observatory in Hawaii is considered the gold standard for consistent year-by-year measurement of atmospheric CO2. The Keeling Curve, named after American scientist Charles Keeling, who started the programme at Mauna Loa, is built from the observatory’s recordings and is considered one of the most accurate records of recent atmospheric CO2 accumulation. Its first recording, in 1958, put atmospheric carbon dioxide at 316 ppm.
For data on atmospheric carbon levels prior to 1958, climate scientists have relied on ice cores: fragments of ancient ice retrieved from deep below the surface of the Earth’s glacial fields and ice sheets. When accumulated snow compacts into ice faster than it melts, bubbles of air preserving traces of the atmospheric makeup of the time become entombed in stratified layers of ice. These cores can be analysed to reconstruct past changes in atmospheric gases, including CO2.
The deepest ice cores on Earth have been retrieved from Antarctica, the oldest of which has allowed scientists to determine atmospheric CO2 levels from as far back as two million years ago. While atmospheric carbon and global climate trends have fluctuated massively over the Earth’s past, ice cores reveal that our current concentration is a stark anomaly: for the several thousand years preceding industrialisation, CO2 levels never drifted outside a narrow 270-290 ppm range.
This changed in the mid-1770s, when the Industrial Revolution kicked off the era of fossil fuels and mass industry. The targets outlined in the Paris Agreement, informed by the assessments of the Intergovernmental Panel on Climate Change (IPCC), aim to keep temperature rise to 2°C, or ideally 1.5°C, above pre-industrial levels, which translates to roughly 450 and 430 ppm respectively in terms of atmospheric carbon concentration.
It is important to note that these are estimates. In addition to CO2, several other greenhouse gases in the atmosphere cause warming, such as methane and nitrous oxide. Atmospheric CO2 concentration and temperature rise are also not always directly correlated, as even marginal changes in atmospheric GHG levels can cause the Earth’s climate systems to behave in complex and unpredictable ways. However, CO2 is by far the largest greenhouse gas contributor to climate change, and on average the atmospheric buildup of these gases is causing the Earth’s temperature to increase.
Put in terms of atmospheric CO2 concentration, climate change looks like a different beast. A temperature rise of 1.5 or 2 degrees may not seem like much, but in a 2°C scenario the concentration of atmospheric carbon dioxide will have made a dramatic jump from 280 to 450 ppm. And that is only if we manage to keep temperature rise that low. Anything higher would be absolutely catastrophic, but it is becoming increasingly clear that even stabilising close to current levels would irreparably damage human society and the survivability of life on Earth.
We need to deploy both natural and man-made means to actively remove the carbon already permeating our air. Emission reduction targets seek to create a ‘new normal’ of sorts, using pre-industrial levels as a baseline but aiming only to cap how far temperature and atmospheric carbon rise above it. This is a risky approach, and it invites dangerous rhetoric. We need to begin thinking of climate change mitigation in different terms: if a new normal is not sustainable, what must be done to return to pre-industrial atmospheric carbon levels?
The Social Cost of Carbon
The social cost of carbon is a tool that quantifies the future economic impacts of today’s emissions, allowing them to be compared against the cost of removing atmospheric carbon. For instance, if CO2 emissions were assigned a cost of $100 per tonne, any investment of $100 or less to remove a tonne of emissions would be economically wise. By this logic, an investment of $100 million to remove one million metric tonnes of CO2 from the atmosphere would break even or better, saving governments money in the long run.
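The break-even logic above can be sketched in a few lines. This is purely illustrative, using the article’s example figure of $100 per tonne rather than any official social cost of carbon value:

```python
# Break-even test for carbon removal against an assumed social cost of
# carbon (SCC). All figures are the article's illustrative example.

def removal_is_worthwhile(scc_per_tonne: float, removal_cost_per_tonne: float) -> bool:
    """Removal pays off when it costs no more per tonne than the social
    cost that tonne would otherwise impose on society."""
    return removal_cost_per_tonne <= scc_per_tonne

social_cost = 100.0          # assumed SCC, USD per tonne of CO2
tonnes_removed = 1_000_000   # one million metric tonnes
investment = 100_000_000.0   # USD 100 million

cost_per_tonne = investment / tonnes_removed  # 100.0 USD/tonne
print(removal_is_worthwhile(social_cost, cost_per_tonne))  # True: break-even or better
```

At exactly $100 per tonne the investment breaks even; any cheaper removal, or any higher social cost estimate, tips the balance further towards removal.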
The social cost of carbon weighs the upfront costs of atmospheric removal or emissions reductions against the cost of future damages. Since climate change is by its very nature unpredictable and uncertain, however, virtually all estimates are overly conservative. In the US, the Biden administration recently re-evaluated its social cost of carbon to $51 per tonne (the figure had rarely drifted from a $1 to $6 range during the Trump administration), but even this number pales relative to what atmospheric carbon may actually end up costing humanity.
A 2018 study calculated the global social cost of carbon based on how developed and developing countries would be affected differently, and found that the total could range from $177 to $805 per tonne, although even this range was expressed with only 66% confidence. A 2020 study was more ambitious still, attempting to measure the costs of burning fossil fuels to a long-lived human civilisation over a one-million-year timescale. Its best estimate put the social cost of each tonne of carbon released at around $100 000.
The social cost of carbon increases over time simply because, if emissions continue to rise, there will be more CO2 to remove from the atmosphere in the future, and removal will have to be scaled accordingly to avoid complete ecological collapse. This requires scalable carbon capture mechanisms, natural and man-made, but the technology is expensive to implement and scale. Sceptics of carbon capture argue that the technology is too expensive and that we should instead focus on expanding our capacity for cheaper carbon-free energy sources such as solar and wind. But removing atmospheric carbon is critical to averting the worst impacts of climate change, and carbon capture technology has a key role to play. Time presents a challenge, however: delaying action and removing carbon from the atmosphere only after emission rates have continued unabated for years would be economically devastating.
A 2016 study led by James Hansen, former director of NASA’s Goddard Institute for Space Studies and one of the first scientists to warn of the dangers of anthropogenic climate change, attempted to quantify the financial and social costs of climate change mitigation and carbon removal over the next century. If our emissions do not fall drastically now, we are headed towards a near future with atmospheric carbon levels above 500 ppm. This scenario would likely entail a 4°C temperature rise: a truly dystopian future in which coastlines would be swallowed, the tropics would become virtually uninhabitable, and large ice sheets such as the one covering Greenland would melt completely.
In such a scenario, where reducing emissions is delayed to a later date, the costs would add up quickly, reaching up to USD$6.7 trillion a year for 80 years after that date. The bulk of these expenses would cover the removal of the much larger stock of atmospheric carbon.
Most climate scientists agree that 350 ppm would be the highest ‘safe’ level of atmospheric carbon dioxide. Hansen himself has stated: “If humanity wishes to preserve a planet similar to that on which civilisation developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from [current levels] to at most 350 ppm.”
A concentration of 350 ppm would entail a temperature rise of between 0.5 and 1°C. Keeping the increase to this level might hold sea-level rise to a manageable pace; the tropics would remain habitable, food insecurity would not become a pervasive global phenomenon, and natural structures such as ice caps and coral reefs might survive. To reach these levels, Hansen’s team estimates that massive investments would be needed in carbon sequestration systems, including planting trees on an unprecedented scale, rehabilitating natural carbon sinks and scaling man-made technologies. The total cost would be significantly lower than that of delaying action until levels surpass 500 ppm, although it would remain high, with an annual price tag of between $100 billion and over $1 trillion.
This, at around 1°C of temperature rise, is what would be considered a ‘safe’ level. But it is no longer a hypothetical scenario; it is a world we have been living in for several years now, and one we are rapidly leaving behind, given that atmospheric CO2 levels are increasing by close to 3 ppm a year. Maintaining current conditions may well be a more manageable proposition than adapting to the consequences of 500 ppm and above, although we would still have to contend with serious adverse side-effects; consider only the intensity and frequency of the wildfires, hurricanes and heat waves that have struck the world in the past few years. Even at 350 ppm, increased rates of biodiversity loss, droughts, food shortages and worsening public health conditions will have a direct impact on human society.
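The growth rate cited here implies an uncomfortably short timeline. A back-of-the-envelope sketch, assuming the article’s ~416 ppm starting point and a steady upper-end rate of 3 ppm a year (real annual growth varies):

```python
# Rough timeline to the ppm thresholds discussed in the article,
# assuming a constant growth rate. Illustrative only.

current_ppm = 416.0      # approximate level cited in the article
growth_per_year = 3.0    # upper-end annual increase cited in the article

for threshold in (430.0, 450.0, 500.0):
    years = (threshold - current_ppm) / growth_per_year
    print(f"{threshold:.0f} ppm in roughly {years:.0f} years")
```

On these assumptions the 430 ppm mark associated with 1.5°C arrives in under five years, and 450 ppm in little more than a decade.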
Even at what has been called a ‘manageable’ level of atmospheric carbon dioxide, then, there is no safe amount of atmospheric greenhouse gases above pre-industrial levels that can be retained. What politicians have failed to properly internalise is that pollution and emissions are our responsibility, and they place an inconceivable burden on future generations: for every gram of greenhouse gas we release into the atmosphere and do not sequester, they will pay a price. On owning the responsibility for our emissions, University of California glaciologist Eric Rignot has said: “All the pollution we put in the air, we’re going to have to take it back.”
The Carbon Capture Debate
So how could we return to under 300 ppm? It is certainly vital that industries stop burning fossil fuels and transition to carbon-free alternatives immediately, but even this will probably not be enough. Doing so would still leave us at levels above 400 ppm, vulnerable to climate change for centuries to come. While some powerful greenhouse gases have a remarkably short residence time in the atmosphere (methane molecules usually last only around 12 years), CO2 molecules can remain in the air and affect the climate for thousands of years after being released. Lingering CO2 keeps positive feedback loops running for years to come, such as lower albedo in the polar regions and thawing permafrost releasing methane, affecting the lives of countless humans over extended timescales.
We therefore need to reach negative emissions, where we remove more carbon from the atmosphere than we put in. The IPCC estimates that 1 000 gigatonnes of CO2 need to be removed from the atmosphere before 2100 to comfortably stay in line with the 1.5°C temperature rise goal. Returning to pre-industrial levels would require even more investment.
The good news is that the means to do so, natural and artificial, already exist. Man-made carbon capture and storage (CCS) systems have been around since the 1920s, when natural gas companies deployed equipment to separate carbon dioxide from marketable methane gas. Carbon sequestration is also a naturally occurring process in carbon sinks, and is in fact one of the most economically valuable services ecosystems provide: terrestrial vegetation and soil combine to absorb 40% of annual anthropogenic CO2 emissions, and oceans alone absorb almost a third of global emissions.
The bad news is that financial incentives to employ CCS technology are still lacking. Despite recent technological advancements and innovation, the cost of implementing CCS remains high. With current technology, it would cost between USD$100 and $300 to remove one metric tonne of carbon dioxide from the atmosphere. Even at the cheaper end, removing the 1 000 gigatonnes of atmospheric carbon recommended by the IPCC would cost USD$100 trillion, around 114% of current global GDP, and this would only be enough to keep warming at 1.5°C. Implementation costs will presumably fall as new technology is developed, although most current projections see costs bottoming out no lower than $30 per metric tonne of CO2, which would still make CCS prohibitively expensive to implement on a wide commercial scale.
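The arithmetic behind these totals is straightforward to check. The sketch below uses only the article’s own figures; the global GDP value is an assumption back-derived from the stated “114% of current global GDP”:

```python
# Total removal cost at a flat per-tonne price, using the article's figures.

TONNES_PER_GIGATONNE = 1e9

def total_removal_cost(gigatonnes: float, usd_per_tonne: float) -> float:
    """Cost of removing a given mass of CO2 at a flat per-tonne price."""
    return gigatonnes * TONNES_PER_GIGATONNE * usd_per_tonne

target_gt = 1_000                    # IPCC removal budget cited in the text
low_cost, high_cost = 100.0, 300.0   # current USD cost range per tonne

cheap_total = total_removal_cost(target_gt, low_cost)  # USD 100 trillion
global_gdp = 87.7e12  # assumed, implied by the article's 114%-of-GDP figure
print(f"Low-end total: ${cheap_total:,.0f} ({cheap_total / global_gdp:.0%} of GDP)")
```

At the $300 upper end the same budget triples to USD$300 trillion, and even the projected $30 floor still leaves a USD$30 trillion bill for the IPCC’s 1 000-gigatonne target.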
The debate over carbon capture is not simply about price, which in any case rests on approximations and projections of future technologies. Critics of CCS claim that these systems embolden fossil fuel companies to continue burning hydrocarbons and adding to global emissions. For their part, fossil fuel companies have touted bringing carbon capture to scale, since the cost of updating their infrastructure with new technology is minimal compared to that of paying a hefty carbon tax. The criticism is that carbon capture would create a sense of complacency around fossil fuels and slow the transition towards 100% renewable energy use.
Atmospheric CO2 removal is considered a form of geo-engineering, which has itself attracted substantial criticism and apprehension. Geo-engineering, or climate engineering, is the deliberate, large-scale intervention of humans in the world’s climate through artificial means. Part of the concern is that we simply do not know whether geo-engineering methods of mitigating climate change would be effective, and relying on them only for them to fail would be catastrophic to our prospects. It is also possible that geo-engineering could trigger unintended and irreversible consequences and feedback loops in the global climate.
All these concerns are valid, but the regrettable conclusion most climate scientists are arriving at is that we are well past the point where debate is constructive. Whether our objective is to stabilise at 350 ppm or, ideally, under 300 ppm, the active removal of atmospheric carbon has to become an integral part of reaching it.
At this stage, the argument needs to be condensed to two points. First, in no way should carbon capture ever be considered a replacement for lowering emissions. Second, we have to do both if we want to reach carbon neutrality and eventually negative emissions.
Natural carbon sinks can be a very effective means of absorbing atmospheric CO2, and every effort should be made to preserve remaining sinks and maximise the capacity of super-efficient ones, especially coastal and marine ecosystems. However, the argument that replanting trees is the ultimate carbon sequestration solution has several fallacies. A 2019 study found that in the current climate we could potentially plant an additional 900 million hectares of forest, which would absorb 205 gigatonnes of carbon, reduce atmospheric carbon concentration by 25% and even keep us carbon neutral for the next 20 years if current emission rates hold.
While that may sound appealing, 900 million hectares is nearly the size of the continental United States. Trees also require time to reach maturity, and need to be cared for and protected while growing. NASA scientist Sassan Saatchi said on the subject: “If we follow the paper’s recommendations, reforesting an area the size of the United States and Canada combined could take between one and two thousand years, assuming we plant a million hectares a year and that each hectare contains at least 50 to 100 trees to create an appropriate treetop canopy cover.”
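The timescale problem is visible from the planting rate alone. A minimal sketch, using the study’s 900 million hectares and the one-million-hectares-a-year rate assumed in the quote:

```python
# How long planting the 2019 study's potential forest area would take
# at the planting rate assumed in the quote. Illustrative only; it
# ignores the additional decades each tree needs to reach maturity.

total_hectares = 900_000_000  # potential new forest area (2019 study)
rate_per_year = 1_000_000     # assumed planting rate from the quote

years_to_plant = total_hectares / rate_per_year
print(years_to_plant)  # 900.0 years of planting alone
```

Nine centuries of planting, before accounting for growth to maturity, is far outside any climate-relevant window.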
Natural carbon capture methods take time to develop and mature to their full potential, something which we do not have much of. Existing carbon sinks need to be preserved and protected from human encroachment, but it is becoming increasingly clear that man-made carbon capture technology will have to play an important role in returning us to below 300 ppm.
While the costs may still be prohibitive, future technological developments will undoubtedly change the commercial picture for carbon capture. Artificial carbon removal is fast becoming a necessary measure, and removing carbon from the atmosphere will only get more expensive the longer we wait to do it at scale.
The Future of Carbon Markets
Many potentially game-changing carbon capture technologies are in development, promising a multitude of benefits and applications across a variety of sectors. Beyond the technology itself, carbon capture systems could provide a further incentive to create a functional carbon market.
In February 2021, Elon Musk of Tesla launched the $100 million XPrize Carbon Removal competition, inviting teams to submit scalable designs for carbon capture systems that could remove one gigatonne of CO2 a year from the atmosphere. A major goal of the competition is to uncover solutions that allow consumers to capture carbon from the air and sell it, valorising captured CO2 and helping to create a market for carbon.
If captured CO2 were a marketable commodity, it would have valuable applications in construction, in synthetic fuel production and as a feedstock for industrial and chemical processes. Under good policy, the combined value of markets for captured carbon dioxide could surpass $1 trillion by 2030, with ample room for further growth.
If new technologies are successful, and developers are able to scale them sufficiently, carbon capture could become ubiquitous in modern infrastructure. Much as homes equipped with solar panels created a more consumer-driven energy market, buildings equipped with carbon capture systems could help create a carbon market with the consumer at its centre.
Carbon capture technology has a long way to go, and many uncertainties over economics and applications remain, but it is becoming increasingly crucial to our hopes of resolving the climate crisis. Reducing emissions should remain at the forefront of our concerns, but we are well past the point where carbon neutrality alone would be sustainable or even economically viable. We need to deploy every technology, resource and instrument at our disposal to return the Earth’s atmosphere to a state that is habitable both now and in the future, and to ensure that the social costs of carbon do not become irreversible.