
Thursday, October 3, 2013

About that "Hiatus" - IPCC climate models and recent observations

Sou | 1:47 AM

This week the fashion is shaping up to be a look at models vs observations.  Some deniers keep saying that the comparison of recent observed surface temperature anomalies with models is not covered in the new WG1 report, but all that shows is that those deniers haven't read it.  For example, Judith Curry writes (archived here):
What is wrong is the failure of the IPCC to note the failure of nearly all climate model simulations to reproduce a pause of 15+ years.
Once again, Judith Curry is wrong.  I believe she's wrong to call it a "pause of 15+ years", particularly as it starts close to the extremely hot year of 1998, after which the temperature dropped below the trend only to rise higher still later on.  The first decade of this century was the hottest in the instrumental record, and 2010 was the equal hottest year ever recorded (with 2005).  If I were to say there was a "hiatus" I'd say it started around 2005, which is hardly long enough to even call it a "pause".  The following animated chart illustrates why I say that, and so does the chart further down in this article:

Data source: NASA
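The cherry-picking problem with starting a 15-year trend at a hot year like 1998 can be sketched with a few lines of Python. The anomaly values below are illustrative stand-ins, not actual GISTEMP data; they just mimic a hot 1998 spike followed by a dip and a later rise:

```python
def ols_slope(years, values):
    """Ordinary least-squares slope (degrees C per year)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

years = list(range(1996, 2013))
# Hypothetical anomalies (deg C) -- note the big spike at 1998.
anoms = [0.33, 0.46, 0.61, 0.38, 0.40, 0.54, 0.63, 0.62, 0.54,
         0.68, 0.64, 0.66, 0.54, 0.64, 0.72, 0.61, 0.64]

# Trend starting at the hot year 1998 vs starting one year later.
trend_1998 = ols_slope(years[2:], anoms[2:])
trend_1999 = ols_slope(years[3:], anoms[3:])
print(f"1998-2012 trend: {trend_1998:.4f} C/yr")
print(f"1999-2012 trend: {trend_1999:.4f} C/yr")
```

Starting the trend at the hot year flattens it; starting just one year later steepens it noticeably, which is why the choice of 1998 as a start point matters so much to the "pause" argument.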

Regardless of when the "pause" started, Judith is definitely wrong to imply that the report doesn't address the difference between model average and observations in recent years.  (There is a difference between observations and model averages, but the observations are still within the range of modeled temperatures.)

The IPCC WG1 report does note that there is a difference - and at some length.  I started with the technical summary from page TS-26, which has a box labelled:
Box TS.3: Climate Models and the Hiatus in Global-Mean Surface Warming of the Past 15 years

Before getting too worked up about any discrepancy, I thought it worthwhile to look at how well models and observations lined up in the past.  This strikes me as being relevant because models don't claim to match observations closely on a year to year basis.  They are looking at longer periods of time - those relevant to climate change, not weather change.



Here is an animated chart of part of Figure TS.9 (a) (page TS-93) from the WG1 Technical Summary showing CMIP3 and CMIP5 model runs, for which I've highlighted parts where observations seem to diverge most from the model means:


For the most recent period, after 2000, CMIP5 is closer to observations than CMIP3, but the observed surface temperatures, while within the range of both CMIPs, are lower than both CMIP averages (the red and blue lines).  Notice that they don't diverge until around 2005, as I argued above.

One reason people get hung up on the last decade could be that for the previous forty years or so, going back to around 1960, the observations were remarkably closely aligned with the model averages.  However, as shown in the above animation, there were also times when the observations veered away from the model averages, though, just as now, they stayed within the range of model runs.  From the IPCC report (my bold italics):
Fifteen-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series. However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations) reveals that 111 out of 114 realisations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box TS.3, Figure 1a; CMIP5 ensemble-mean trend is 0.21°C per decade).
Here is Figure 1 (a), (b) and (c) from Box TS.3 (click image for larger version):


The HadCRUT4 observations are at the far left of the frequency distribution of the CMIP5 models.  Compare that with the period 1984 to 1998, where the observations are to the right of the frequency distribution.  Over the full period from 1951 to 2012, however, the observations are closely aligned with the mode of the frequency distribution.
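The comparison behind the "111 out of 114 realisations" statement can be sketched as a simple count: compute each model run's 1998–2012 trend and count how many exceed the observed trend. The numbers below are hypothetical stand-ins drawn from a normal distribution, not real CMIP5 or HadCRUT4 trends; they just illustrate the counting:

```python
import random

random.seed(42)

observed_trend = 0.05  # deg C per decade -- illustrative only

# Hypothetical ensemble of 114 model trends centred near the
# CMIP5 ensemble-mean trend of 0.21 C/decade quoted in the report.
model_trends = [random.gauss(0.21, 0.08) for _ in range(114)]

n_above = sum(t > observed_trend for t in model_trends)
print(f"{n_above} of {len(model_trends)} model trends exceed the observed trend")
```

With the observed trend sitting roughly two standard deviations below the ensemble mean, nearly all of the simulated trends come out higher, which is the shape of the result the report describes.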

Why is there a discrepancy between recent observations and climate models?


The IPCC report discusses the discrepancy under three headings:
  1. internal climate variability, 
  2. missing or incorrect radiative forcing, and 
  3. model response error.
Regarding internal variability, the report points out that sea level has continued to rise and the oceans are still accumulating heat.  So it's highly likely that the earth system is still accumulating energy, more in the deeper ocean than in the top 700m layer.  It's just that the energy being accumulated is not going into heating the air.

As for missing or incorrect radiative forcing, that doesn't seem to be the case, if I read the report correctly.  The report discusses how solar forcing dropped in the first decade of this century, and notes a number of smaller volcanic eruptions that could have affected the amount of radiation being stored in the earth system.  However, according to the report, there is nothing to suggest the models aren't accurately reflecting the amount of incoming radiation.

The report then looks at the possibility that there are errors in the model response.  Some models may be running too hot in response to greenhouse gas forcing.  This is based on observations so it seems to me to be a bit of a circular argument, but again - I'm no expert.  The report discounts the impact of stratospheric water vapour as an explanation of the difference (and explains why).

The report sums up the likely reasons for the recent slow down in surface warming as a combination of internal variability and a reduction in external radiative forcing:
In summary, the observed recent warming hiatus, defined as the reduction in GMST trend during 1998–2012 as compared to the trend during 1951–2012, is attributable in roughly equal measure to a cooling contribution from internal variability and a reduced trend in external forcing (expert judgment, medium confidence). The forcing trend reduction is primarily due to a negative forcing trend from both volcanic eruptions and the downward phase of the solar cycle. However, there is low confidence in quantifying the role of forcing trend in causing the hiatus, because of uncertainty in the magnitude of the volcanic forcing trend and low confidence in the aerosol forcing trend.


What can be expected in the near future? More heat!


The next question is what can be expected in the next couple of decades. The section of the report dealing with the "hiatus" says the world will get hotter at a faster rate than over the previous 15 years, which makes sense to my way of thinking.  This is from the Technical Summary, page TS-29 (my paras):
The causes of both the observed GMST trend hiatus and of the model–observation GMST trend difference during 1998–2012 imply that, barring a major volcanic eruption, most 15-year GMST trends in the near-term future will be larger than during 1998–2012 (high confidence)......The reasons for this implication are fourfold:
  • first, anthropogenic greenhouse-gas concentrations are expected to rise further in all RCP scenarios;
  • second, anthropogenic aerosol concentration is expected to decline in all RCP scenarios, and so is the resulting cooling effect;
  • third, the trend in solar forcing is expected to be larger over most near-term 15–year periods than over 1998–2012 (medium confidence), because 1998–2012 contained the full downward phase of the solar cycle; and
  • fourth, it is more likely than not that internal climate variability in the near-term will enhance and not counteract the surface warming expected to arise from the increasing anthropogenic forcing.

Here is a chart from Figure TFE.3 Figure 1 (page TS-96):

I want to figure out why the greyed-out area is the "indicative likely range" for all RCPs when the higher part of the model runs is outside that likely range.  I've looked at the text accompanying the above chart.
Projections of annual mean global mean surface air temperature (GMST) for 1950–2035 (anomalies relative to 1961–1990) under different RCPs from CMIP5 models (light grey and coloured lines, one ensemble member per model), and observational estimates the same as the middle left panel. The grey shaded region shows the indicative likely range for annual mean GMST during the period 2016–2035 for all RCPs (see Figure TS.14 for more details). The grey bar shows this same indicative likely range for the year 2035.

It says to look at TS.14 so here are the top two charts in Figure TS.14.  As always, click the charts to enlarge them:

Figure TS.14 (a) Projections of annual mean GMST 1986–2050 (anomalies relative to 1986–2005) under all RCPs from CMIP5 models (grey and coloured lines, one ensemble member per model), with four observational estimates (HadCRUT4, ERA-Interim, GISTEMP, NOAA) for the period 1986–2012 (black lines).

Figure TS.14 (b) as (a) but showing the 5–95% range of annual mean CMIP5 projections (using one ensemble member per model) for all RCPs using a reference period of 1986–2005 (light grey shade) and all RCPs using a reference period of 2006–2012, together with the observed anomaly for (2006–2012)-(1986–2005) of 0.16°C (dark grey shade). The percentiles for 2006 onwards have been smoothed with a 5 year running mean for clarity. The maximum and minimum values from CMIP5 using all ensemble members and the 1986-2005 reference period are shown by the grey lines (also smoothed). Black lines show annual mean observational estimates.
There is still more text for Figure TS.14 (b) above:
The red shaded region shows the indicative likely range for annual mean GMST during the period 2016–2035 based on the “ALL RCPs Assessed” likely range for the 20 year mean GMST anomaly for 2016–2035, which is shown as a black bar in both panels (b) and (c) (see text for details). The temperature scale relative to pre-industrial climate on the right hand side assumes a warming of GMST prior to 1986–2005 of 0.61°C estimated from HadCRUT4.
So now back to the text to see if I can find the details referred to.  I found the text at the bottom of page TS-47 in the section headed: TS.5.4.2 Projected Near-Term Changes in Temperature. It turns out that the projected rise is based on "observationally-constrained projections" and "predictions initialised with observations".
In the absence of major volcanic eruptions—which would cause significant but temporary cooling—and, assuming no significant future long term changes in solar irradiance, it is likely that the GMST anomaly for the period 2016–2035, relative to the reference period of 1986–2005 will be in the range 0.3°C to 0.7°C (medium confidence). This is based on an assessment of observationally-constrained projections and predictions initialized with observations. This range is consistent with the range obtained by using CMIP5 5–95% model trends for 2012–2035. It is also consistent with the CMIP5 5–95% range for all four RCP scenarios of 0.36°C to 0.79°C, using the 2006–2012 reference period, after the upper and lower bounds are reduced by 10% to take into account the evidence that some models may be too sensitive to anthropogenic forcing (see Table TS.1 and Figure TS.14). {11.3.6}

Maybe that explains why only the lower end of the model runs is considered "likely".  They are dragged down by the recent temperatures. I can idly speculate on whether this is reasonable or not, given that there have been overshoots and undershoots in the past. But that's of no value because I'm not an expert in climate models, and in any case I might be barking up the wrong tree in my interpretation.  Maybe Gavin Schmidt or another informed HotWhopper reader who works with climate models can expand on how valid it is to constrain the projections to initialised models.  I recall reading (and writing) about an article that suggested that initialising models with recent observations only worked for a very short period of time, after which the models went back to doing their own thing.  Here's a direct link to the Tollefson article in Nature.
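The 10% adjustment mentioned in the quote above is simple arithmetic, sketched below. Note that the reduced range is still relative to the 2006–2012 reference period, so it isn't directly comparable to the assessed 0.3°C to 0.7°C range, which uses the 1986–2005 reference period; the report says the two are merely "consistent":

```python
# CMIP5 5-95% range for 2016-2035, relative to 2006-2012 (from TS-47).
lower, upper = 0.36, 0.79  # deg C

# Upper and lower bounds each reduced by 10%, reflecting the
# assessment that some models may be too sensitive to
# anthropogenic forcing.
reduced_lower = round(lower * 0.9, 3)
reduced_upper = round(upper * 0.9, 3)
print(f"reduced range: {reduced_lower} to {reduced_upper} C")
```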

Disclaimer: I've pointed out a few times that I'm not even a climate scientist let alone a climate modeller, so the above should be read in that light.  What I wanted to do was to summarise the science as reported in the IPCC WG1 documents in relation to the so-called "hiatus", as well as think about what's going to happen in the near term - that is, over the coming couple of decades.

I'd love for other people to chip in with their thoughts on the topic.

21 comments:

  1. Badly written! An interested reader cannot understand what exactly has been plotted.

    What is likely important is that the uncertainty is asymmetrical, the temperature difference between the mean prediction and the 99th percentile (the projection below which the temperature should stay with 99% probability) of the prediction is much larger than the difference between the mean and the 1st percentile. This could explain partially why the box is in the lower part of the spaghetti plot with all the model runs. However, the pink box in Figure TS.14 is below the minimum of the model runs, so this cannot be the full explanation.

    Furthermore, the lines of the box look to be very straight and are thus likely not directly based on the model runs.

    Thus I would say that the box is based on an expert assessment of likely warming rates for the near future. It could be that the experts estimate that the true uncertainty is larger than the uncertainty indicated by the spread of the various models. It is possible that all models make a similar error, because they all omit certain processes or because they all model certain processes in a similarly simplified way (physics parametrisations). Maybe they have a reason to expect that the models are somewhat biased.

    Whatever it is, they should have written that clearly.

    1. Victor, thanks for that explanation. It makes sense that it could be an expert assessment - and maybe that's clarified somewhere in the report.

      I admit I'm having considerable difficulty with parts of the report, particularly the descriptions of charts and figures. The summary for policy makers is fine and I haven't had any real trouble with that so far. Parts of the technical summary are fine, but parts are more difficult to read.

      However, there are some descriptions, particularly of figures, in the technical report and the full report that are so dense they are almost impenetrable - to me anyway. The sentences are too long and convoluted, too many references to "elsewhere in the report" - which means searching for figure numbers to try to work out what's going on. And there seem to be a lot of charts and figures that are trends of trends and maybe even trends of trends of trends.

      That's just an initial take. Maybe if I keep plowing through it I'll start to make some headway.

      Maybe I just picked the hard parts first. I hope that's the case. I know a lot of people have put in a lot of hours and I don't aim to belittle their efforts. I imagine that if I were an expert in the area I'd understand it all very easily.

      I wonder if the IPCC has ever given any thought to including science communication specialists on the editing team - to help translate science-speak into easier language without losing the meaning or intent. Maybe they do already.

    2. Ed Hawkins says: "Red hatched [which is the box we are discussing, vv] and black bar also relative to 1986-2005 and are expert judgement based on evidence in panel c which includes raw models, observationally constrained models and plausible future trends."

      So indeed expert judgement seems to be involved and explains the difference from the raw model output.

    3. Thanks for that reference to the comment by Ed Hawkins. It sounds as if he may have more insight to offer later on.

      I suppose we have to cross our fingers and hope that the experts are correct and some of the models are overheated. We'll find out soon enough.

      At the risk of sounding like a DuKE - I don't place a huge amount of faith in the "observationally constrained" results.

    4. If "observationally constrained" means that they give good models more weight, I think that is a good idea. Many groups nowadays run climate models; computer power is relatively cheap. However, not every group is large enough to be able to build a good model.

      I do hope that they did not just look at the performance of the models with respect to the global mean surface temperature. I would expect that they consider more variables and also regional patterns.

    5. I don't know, the recent Kosaka and Xie paper is observationally constrained and does a pretty good job of reproducing the recent surface temperature evolution. A rather elegant experiment actually.

    6. Yes, Rattus, you make a good point. My comments are tainted by my ignorance and I don't mean to suggest the experts are wrong.

      I'll see what more I can find out about what "observationally constrained" means in this context - eg how far back in time etc.

      The "plausible future trends" could be more open-ended, subject to judgement calls, or there could be specific criteria for judging plausibility. It would be useful to get more insight into how these judgements are made.

    7. Sou, I asked and finally got an answer: there is no help from science communication experts, certainly not at the level of the underlying chapters.

      Might be an idea for the next time. Although that may make the process even more difficult.

    8. Thanks for asking, Victor.

  2. Interesting that the models appear to underestimate global temperatures during the positive phase of the Interdecadal Pacific Oscillation (IPO), but overestimate temperatures during the (current) negative phase of the IPO. I hadn't noticed that before.

    IIRC, only a handful of the models accurately describe the observed change in wind-forcing that is responsible for the variation in heat uptake by the ocean. Failing to do so would mean that the models as a whole might underestimate natural variability, although the observed global brightening through the 1970s to late 1990s and dimming thereafter certainly complicates matters.

    1. Good points, Rob. Does the Tsonis/Swanson paper touch on the IPO issue? With their "shifts" hypothesis?

      The models do look to be fairly close to observations for the most recent positive IPO - 1978-98.

    2. Meehl et al (2013) investigates the role of the IPO. (They also have a couple of earlier papers.)

      It's not in the paper (at least I didn't notice it), but the IPO influences ENSO (strength and place).

    3. Sou - I have not read the Tsonis/Swanson paper. It appears to be one of those "I see cycles" statistical matching papers. Dubious in other words. There is no physics behind such work, so it's of no value. It could be that they have detected the influence of the IPO, but I've got better things to do than read quack science.

      And if you take another look you'll see a lot of the model runs for the positive phase of the IPO (1977-1999), the 1990's in particular, are below the observed temperature. The Mt Pinatubo response might be an issue there though.

  3. What is interesting is that your last 2 graphs do show some inconvenient truths. First, the observations are in fact right at the 5% confidence lower bound for the models. Second, the "likely range" is actually lower than the model ensemble by quite a bit, indicating that even the IPCC realizes that the models are running too hot. By the way, if one looks at 30, 20, 15 and 10 year trends, it is still true that the observations are right at the 5% lower confidence bound.

    With regard to skeptics, "it is the scope of the writer that giveth the true light... and those that insist on single texts can show no thing from them clearly, an ordinary artifice of those who seek not the truth but their own advantage." Skeptics have been in the main saying just what I said in paragraph 1. You should be honest enough to recognize that.

    1. One thing I've noticed about fake sceptics. They can look at pictures (sometimes) but have a distinct aversion to reading words.

      Maybe a climate comic picture book could be developed for them.

    2. Can't respond without name calling. The text you cite says the likely range was lowered 10% from the model range. This is an admission the models are running too hot.

    3. And does this have any policy implications? Nope.

    4. And I love the blatant misrepresentation of the AR5 text, which is clear (at least on this point).

      Anon writes:

      This is an admission the models are running too hot.

      But AR5 says:

      after the upper and lower bounds are reduced by 10% to take into account the evidence that some models may be too sensitive to anthropogenic forcing (see Table TS.1 and Figure TS.14). {11.3.6}

      Spot the difference?!

    5. Anonymous accused me of being dishonest and when I replied that Anonymous didn't read what I wrote bleats something about name-calling.

      Here's another "name" - in addition to not taking the effort to read what's written, fake sceptics tend to be sooks.

    6. "Sceptics" have, in the mass, been declaring models completely discredited since they first emerged. Many will be aware of Pat Michaels giving evidence to Congress in 1998 and lying like a trooper about that very subject. They have certainly not, in the main, been saying that observations are within the 5% uncertainty range. If that's what they're acknowledging now then they are not as ineducable as they've always seemed. That or they don't see what they're doing, whatever.

      The uncertainty allows for natural variability, such as is caused by ENSO and volcanoes. Given the La Niñas in 2008 and 2011, a few minor volcanoes, and low solar activity in the last decade, it seems unlikely that observations will breach that 5% uncertainty limit.

    7. If the 5% level is never undercut there would be a problem as well. The 5% level should be "breached" 5% of the time. Otherwise the uncertainty estimate would be too large.

