The "uncertainty loop" haunting our climate models

An atmosphere of uncertainty. (Shutterstock)

In the early years of the battle over climate science, advocates and scientists went out of their way to stress how much was understood and relatively certain in the study of climate. This "science is settled" approach was a predictable response to the well-funded campaign of obscurantism launched by fossil fuel interests and their friends on the right, which cynically used uncertainty as an argument for delaying action.

Now that climate hawks are emerging a bit from their defensive crouch, however, more attention is turning to the many uncertainties that haunt climate. Consider these layers:

  1. To begin with: How will human economic activity this century translate into greenhouse gas emissions? How much will we emit? To answer that, we need to know how much population will grow, how much the global economy will grow, what per capita emissions will look like in 2050, 2080, etc.
  2. Which leads to: How will a rise in greenhouse gases translate into a rise in global average temperature? How sensitive is the climate to greenhouse gases? (In the biz, "climate sensitivity" refers to the rise in temperature that would result from a doubling of atmospheric greenhouse gas concentrations relative to pre-industrial levels.)
  3. Which leads to: How will a rise in global average temperature translate into climate impacts (rising sea levels, etc.)? How do systems like ocean and air currents respond to temperature? What kinds of responses will be seen in different subclimates and latitudes?
  4. Which leads to: How will the impacts of climate change translate into impacts on human lives and economies? In other words, how much will climate impacts hurt us? How much GDP growth will they thwart (or reverse)? Will future people be richer and better able to adapt, or poorer because of climate change itself?

The really funny thing? The answer to 4 depends on the answer to 3, which depends on the answer to 2, which depends on the answer to 1, which depends on ... the answer to 4.

It's a loop. An uncertainty loop!

Round 'n' round. (Javier Zarracina/Vox)

Basically, it's difficult to predict anything, especially regarding sprawling systems like the global economy and atmosphere, because everything depends on everything else. There's no fixed point of reference.
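
To make the circularity concrete, here's a toy Python sketch of the loop. Every function and coefficient in it is invented for illustration; the point is only that the output of step 4 becomes the input of step 1.

```python
# A toy "uncertainty loop": each projection feeds the next, and the last
# feeds back into the first. All functions and numbers are invented.

def emissions(gdp):
    return 0.5 * gdp                       # 1. economic activity -> emissions

def warming(cumulative_emissions):
    return 0.002 * cumulative_emissions    # 2. emissions -> temperature

def impacts(temp):
    return temp ** 2                       # 3. temperature -> physical impacts

def economy(impact):
    return 100.0 / (1.0 + 0.01 * impact)   # 4. impacts -> economy -> back to 1

gdp = 100.0  # an initial guess; there is no fixed point of reference
for i in range(8):
    t = warming(emissions(gdp) * 85)   # crude 85-year cumulative emissions
    gdp = economy(impacts(t))          # the answer to 4 resets question 1
    print(f"round {i}: warming {t:.1f}C, GDP index {gdp:.1f}")
```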

Grappling with this kind of uncertainty turns out to be absolutely core to climate policymaking. Climate nerds have attempted to create models that include, at least in rudimentary form, all of these interacting economic and atmospheric systems. They call these integrated assessment models, or IAMs, and they are the primary tool used by governments and international bodies to gauge the threat of climate change. IAMs are how policies are compared and costs are estimated.

So it's worth asking: Do IAMs adequately account for uncertainty? Do they clearly communicate uncertainty to policymakers?

The answer to those questions is almost certainly "no." But exactly why IAMs fail at this, and what should be done about it, is the subject of much debate.

On one hand, there are people who believe that making climate policy without models and scenarios to guide us is hopeless. They think IAMs can be improved, both in their accuracy and in the way they frame and express degrees of uncertainty. On the other hand, you have people who believe that the entire exercise is futile, that the faux precision of these models only misleads policymakers, and that the attempt to predict the far future should be abandoned in favor of a more values-based, heuristic approach.

Let's run through the critiques.

Saving IAMs by better capturing their uncertainties

A major new working paper was recently released that bears directly on this question. It's called "Modeling Uncertainty in Climate Change: A Multi-Model Comparison," which probably sounds boring until you hear that it's "the first multi-model analysis of parametric uncertainty in economic climate-change modeling"! Ahem. Anyway, it's by some of the leading lights in the climate economics and policy worlds, including legendary environmental economist William Nordhaus of Yale.

The paper is fairly technical, but the upshot is that the IAM community is likely underestimating uncertainty (and, therefore, misleading policymakers).

There are lots of IAMs out there, probably a dozen or so in common use. The way researchers have typically tried to assess the degree of uncertainty in climate forecasting is by comparing projections across different IAMs. The spread between the projections is used as a stand-in for the degree of uncertainty involved. This is known as the "ensemble" technique, as it makes comparisons across an ensemble of models.

The authors argue that this is a woefully misleading approach. To explain why, they distinguish two sorts of uncertainty:

  • Model uncertainty has to do with how various structural features and functions of the models are specified. What variables do they include, and how are the variables treated? Once the parameters are input, how are outcomes calculated?
  • Parametric uncertainty has to do with the uncertainty of the parameters themselves. There may be one central estimate for growth in greenhouse gas concentrations and another for growth in per capita productivity, but one of those estimates may be far less certain (have a wider probability distribution) than the other.

This is not exactly intuitive, so let's look at an example.

One parameter that plays a key role in every model is "equilibrium climate sensitivity," which refers to how much temperature rise would result from a doubling of greenhouse gases in the atmosphere (relative to pre-industrial levels). There are varying estimates of climate sensitivity across different IAMs.

If you compare estimates of climate sensitivity across different IAMs (the ensemble technique), say the authors, all you'll uncover is model uncertainty — the way different models treat it, the different variables and calculations they use.

However, the authors say, it may just be that the models are all drawing on the same limited pool of data and research on climate sensitivity — that they are, in effect, sharing the same educated guesses. It may be that a given estimate of climate sensitivity contains more uncertainty within itself (parametric uncertainty) than there is between models (model uncertainty).
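
A quick numerical sketch of that distinction, with every number invented for illustration: five hypothetical models publish climate-sensitivity estimates that cluster tightly (because they share the same literature), while the underlying evidence supports a much wider distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five hypothetical models' climate-sensitivity estimates (degC per CO2
# doubling). They cluster because they draw on the same shared research.
model_estimates = np.array([2.9, 3.0, 3.1, 3.0, 3.2])
ensemble_spread = model_estimates.std()  # the "ensemble" uncertainty

# What the underlying evidence might actually support: a wide, skewed
# distribution around the same central value (parameters invented).
evidence = rng.lognormal(mean=np.log(3.0), sigma=0.35, size=100_000)
parametric_spread = evidence.std()

print(f"between-model spread: {ensemble_spread:.2f} degC")
print(f"parametric spread:    {parametric_spread:.2f} degC")
# The ensemble technique sees ~0.1 degC of uncertainty where the
# evidence implies more than 1 degC.
```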

In the paper, the authors make what they believe is the first attempt to quantify parametric uncertainty in a selection of popular IAMs. It involves picking three key parameters and doing what sounds like a lot of tedious work with various modelers, attempting to standardize metrics and outcomes across models. They also develop a way to quantify the relative contributions of parametric and model uncertainty.
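
Here's a generic sketch of that kind of decomposition (not the authors' exact method): simulate an outcome while varying both the model and the parameter draws, then ask how much of the total variance is explained by the choice of model alone. All models, distributions, and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical "models": same inputs, slightly different structure.
models = [
    lambda sens, growth: 1.0 * sens * growth,
    lambda sens, growth: 1.1 * sens * growth,
    lambda sens, growth: 0.9 * sens * growth + 0.2,
]

# Draws for two uncertain parameters (distributions invented):
n = 20_000
sens = rng.lognormal(np.log(3.0), 0.35, n)   # climate sensitivity
growth = rng.normal(2.0, 0.8, n)             # productivity growth

outcomes = np.array([[m(s, g) for m in models] for s, g in zip(sens, growth)])

# Law of total variance: the variance of the per-model means (parameters
# averaged out) is the share of variance attributable to model choice.
model_share = outcomes.mean(axis=0).var() / outcomes.var()
print(f"model uncertainty's share of total variance: {model_share:.0%}")
```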

We'll skip to the conclusion: For most variables, model uncertainty represents less than a quarter of overall uncertainty. Most of the uncertainty in IAMs is parametric uncertainty. (The only variable for which model uncertainty is the majority is the social cost of carbon, probably because it's powerfully affected by choice of discount rate.)

The authors conclude that "relying upon ensembles as a technique for determining the uncertainty of future outcomes is (at least for the major climate change variables) highly deficient. Ensemble uncertainty tends to underestimate overall uncertainty by a significant amount." (my emphasis)

This guy is also uncertain about his model. (Shutterstock)

There are tons of other interesting results buried in this monster paper — for instance, out of various parameters, uncertainty about future productivity growth has by far the largest implications for outcomes, "which suggests that uncertainty in GDP growth dominates the uncertainty in emissions" — but I don't want to bore you, so let's move on.

Saving IAMs by reducing their uncertainties

One of the biggest knocks against IAMs is that many of their key variables are, to put it technically, pulled out of modelers' asses.

For instance, how much will rising temperatures impact the overall macroeconomic productivity of an economy? In other words, what is the "damage function" of rising temps? That's obviously a key question for determining outcomes, but to date, the damage functions used in IAMs have generally been produced via the rectal extraction procedure described above.
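
For a sense of what a damage function even is, here's a stylized quadratic one of the kind used in DICE-style models. The coefficient is illustrative, not taken from any particular model; picking it is exactly the weakly grounded step described above.

```python
def damage_fraction(temp_rise, psi=0.0024):
    """Fraction of global output lost at a given temperature rise (degC).

    A quadratic form of the kind used in DICE-style IAMs. The coefficient
    psi is illustrative only; it is precisely the contested number.
    """
    return psi * temp_rise ** 2

for t in (1, 2, 4, 6):
    print(f"+{t} degC -> {damage_fraction(t):.1%} of output lost")
```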

Lately, though, there's been some great empirical research on these questions. On rising temperatures, for instance, there have been a lot of actual observations, at the local or regional level, of how heat impacts productivity. So many researchers — see here for a widely hyped paper on the subject — are setting out to update IAMs with better, more empirically informed parameters and functions. They are reducing uncertainty the old-fashioned way, by closely observing the world.

These researchers are convinced that IAMs, even if misleading now, can be made useful through this sort of research. (For another such argument, see here.)

Some analysts say IAMs are so misleading they should be tossed

Consider, again, the layers of uncertainty described at the top of this post. Consider the assumptions upon assumptions upon assumptions required to squeeze all those uncertainties into specific ranges of numbers, and then the additional assumptions required to model how those various uncertain phenomena will interact with one another.

Or to put it another way: Think about how insane it is to try to predict what's going to happen in 2100.

There is a school of thought that says the whole exercise of IAMs, at least as an attempt to model how things will develop in the far future, is futile. There are so many assumptions, and the outcomes are so sensitive to those assumptions, that what they produce is little better than wild-ass guesses. And the faux-precision of the exercise, all those clean, clear lines on graphs, only serves to mislead policymakers into thinking we have a grasp on it. It makes them think we know exactly how much slack we have, how much we can push before bad things happen, when in fact we have almost no idea.

In the view of these researchers, the quest to predict what climate change (or climate change mitigation) will cost through 2100 ought to be abandoned. It is impossible, computationally intractable, and the IAMs that pretend to do it only serve to distract and confuse.

Might as well use one of these. (Shutterstock)

I covered that argument at some length in this post, if you want a nice, nontechnical rundown. This is another short, accessible take. See also MIT's Robert Pindyck, whose 2013 contribution has an abstract so punchy that I'm going to quote the whole thing. The paper's called "Climate Change Policy: What Do the Models Tell Us?"

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g., the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

In this telling, IAMs inherently exaggerate our certainty; they have to, to make the numbers run.
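
Pindyck's point about the discount rate is easy to check with back-of-the-envelope arithmetic: the present value of a given dollar of damage in 2100 swings by a factor of nearly 30 depending on the (essentially arbitrary) rate chosen. The figures below are illustrative.

```python
# Present value today of $1 trillion of climate damages incurred in 2100,
# under different discount rates. The damage figure and the 85-year
# horizon are illustrative.
damage, years = 1.0e12, 85

for rate in (0.01, 0.03, 0.05):
    present_value = damage / (1 + rate) ** years
    print(f"{rate:.0%} discount rate -> ${present_value / 1e9:,.0f}B today")
```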

The point about "catastrophic climate outcomes" is important, and the basis for another common critique of IAMs. The charge is that IAMs can only model continuous damage functions — that is, damages that rise smoothly and continuously. They are incapable of dealing with discontinuities, with sudden, nonlinear changes. These are the "tipping points" people are always worrying about, wherein some natural or social system, subjected to continuous stress, experiences a rapid, lurching phase shift to a different state. Some argue that cost-benefit analysis — of which IAMs are an elaborate form — is intrinsically incapable of dealing with such catastrophes.

If not IAMs, then what?

Many people, even when confronted with the shortcomings of IAMs, are loath to let them go, for the simple reason that they don't see any alternative. If you don't do your best to tally up all the forces and costs involved and weigh them against one another, well, what else would you do? Just guess? Make policy on the basis of instinct and ideology? Better a flawed guide than no guide at all, right?

This is a big subject, deserving of its own post, but it's worth citing one (probably the most popular) alternative.

A Harvard climate economist named Martin Weitzman has, for several years now, been mounting a counterargument to the use of IAMs (and conventional cost-benefit generally) to assess climate policy. The best expression of the argument remains his influential 2009 paper "On Modeling and Interpreting the Economics of Catastrophic Climate Change." (See also last year's "Fat Tails and the Social Cost of Carbon" and his new book with economist Gernot Wagner, Climate Shock.)

In a nutshell, Weitzman argues that climate risks have "fat tail" distributions. In a normal bell-shaped probability curve, the sides drop off quickly — the risks of more extreme outcomes (the tails on either end) fall quickly to zero. But in a fat-tail distribution, risks fall off more slowly at the tails. There are small but non-negligible risks of very extreme outcomes.
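
To see what fat tails mean numerically, compare a thin-tailed normal distribution of possible warming with a fat-tailed Student-t sharing the same center and scale (all numbers invented for illustration):

```python
from scipy.stats import norm, t

loc, scale = 3.0, 1.5  # central warming estimate and spread (invented)

for extreme in (6, 8, 10):
    p_thin = norm.sf(extreme, loc=loc, scale=scale)    # thin tail
    p_fat = t.sf(extreme, df=3, loc=loc, scale=scale)  # fat tail
    print(f"P(warming > {extreme} degC): "
          f"normal {p_thin:.1e}, fat-tailed {p_fat:.1e}")
# The fat-tailed distribution assigns thousands of times more probability
# to the most extreme outcomes.
```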

Fat-tail risk. (Climate Shock, by Gernot Wagner & Martin Weitzman, via NYRB)

In the case of climate change, there are small but non-negligible risks of outcomes so extreme (say, temperature rise of 8°C or more) that they threaten the continuation of advanced civilization. The cost of these extreme scenarios is, for all intents and purposes, infinite. In a cost-benefit analysis, it follows that they are worth literally anything to avoid.

That can't be right, though. It can't be that we should spend literally any amount of money to avoid the small chance of catastrophe; there are, after all, other kinds of possible catastrophes. That bill would get really big, really quick.

In other words, Weitzman argues, fat-tail risks break cost-benefit analysis. (There's a mathematical way of expressing this, but it makes my head hurt.) He thinks we need a new metaphor, a new way of approaching the problem.**

He suggests we view climate change mitigation as a kind of insurance. We pay for fire insurance for our homes, not because we think a fire is likely, but because the price of a fire would be so large if it occurred. Unlikely as it may be, it's worth the money to hedge against it. This isn't an exotic approach — market traders and analysts price risk all the time.
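
A standard expected-utility calculation shows why (the wealth, probability, and loss below are all invented): a risk-averse agent will pay well over the expected loss to eliminate a small chance of ruin.

```python
import math

# A risk-averse agent (log utility) with wealth 100 faces a 1% chance of
# losing 90 of it. All numbers are invented for illustration.
wealth, p, loss = 100.0, 0.01, 90.0

expected_loss = p * loss  # 0.90: what a risk-neutral agent would pay

# Expected utility if uninsured:
eu_uninsured = p * math.log(wealth - loss) + (1 - p) * math.log(wealth)

# Willingness to pay for full insurance: the certain premium x such that
# log(wealth - x) matches the uninsured expected utility.
willingness_to_pay = wealth - math.exp(eu_uninsured)

print(f"expected loss:      {expected_loss:.2f}")
print(f"willingness to pay: {willingness_to_pay:.2f}")  # about 2.3
```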

What would that mean for policymaking? This paper, from Frank Ackerman and colleagues, puts it pretty well:

A better approach to climate policy, drawing on recent research on the economics of uncertainty, would reframe the problem as buying insurance against catastrophic, low-probability events. Policy decisions should be based on a judgment concerning the maximum tolerable increase in temperature and/or carbon dioxide levels given the state of scientific understanding. The appropriate role for economists would then be to determine the least-cost global strategy to achieve that target. While this remains a demanding and complex problem, it is far more tractable and epistemically defensible than the cost-benefit comparisons attempted by most IAMs. [my emphasis]

This strikes me as a better use of modeling than an attempt to specify exactly what our targets should be, based on scenarios plotting out the next 50 to 100 years of industrial civilization. Such scenarios have always seemed faintly absurd to me, for reasons more commonsense than mathematical.

Scenario building makes more sense as a way of testing policies against one another, a way of navigating toward goals that have been determined the old-fashioned way: through compassionate values, empirically informed judgment, and strategic political organizing. Models are best seen as tools, not masters or oracles.



** It's worth noting that Weitzman's work is controversial in climate wonk circles. Nordhaus wrote a comprehensive response to Weitzman; Weitzman in turn reviewed Nordhaus's book; Nordhaus in turn reviewed Weitzman's new book. There have been many more exchanges, genial but sharp, and many other wonks have weighed in as well. You are welcome to Google if you're hungry for more.

Nordhaus's main argument is that many or most climate risks do not in fact have fat tails, so Weitzman's analysis is limited. In fact, he makes that argument in the paper I cited above.

I'm not qualified to settle this dispute, obviously, but my gut says the lack of fat tails Nordhaus cites tells us more about IAMs than the risks themselves — it's an artifact of their assumptions. Either way, I think the insurance metaphor is powerful, even apart from the details of Weitzman's analysis. Your mileage may vary.
