Sunday, December 21, 2014

A silly poster by Pat'n Chip at AGU14 provides a learning experience


Anthony Watts has reported on a poster (archived here). He doesn't comment on it. I doubt he understands it. Most of the people commenting at WUWT don't understand it either. And I don't blame them.

Now I'm no Tamino; however, in my view, what Pat Michaels and Chip Knappenberger have done is flawed from the outset and demonstrates that they don't understand the CMIP5 climate models.

Rather than show a chart of observations vs models (combined), what they've done is show a chart of trends calculated over different periods. That's fine, but further down I'll explain why I believe the conclusions they draw are flawed.

Now they don't provide their data, only the poster. So you haven't got a lot to go on. But here's a copy of the chart. You can see a summary on the AGU14 website here. They didn't upload their poster to the AGU website for some reason, but they did make it available on the CATO website here. Most people who've an interest in climate can probably figure out what they've done and where they went wrong. (Click the chart to enlarge it.)

The annual average global surface temperatures from 108 individual CMIP5 climate model runs forced with historical (+ RCP45 since 2006) forcings were obtained from the KNMI Climate Explorer website. Linear trends were computed through the global temperatures from each run, ending in 2014 and beginning each year from 1951 through 2005. The trends for each period (ranging in length from 10 to 64 years) were averaged across all model runs (black dots). The range containing 90 percent (thin black lines), and 95 percent (dotted black lines) of trends from the 108 model runs is indicated. The observed linear trends for the same periods were calculated from the annual average global surface temperature record compiled by the U.K. Hadley Center (HadCRUT4) (colored dots) (the value for 2014 was the 10-mon, January through October, average). Observed trend values which were less than or equal to the 2.5th percentile of the model trend distribution were colored red; observed trend values which were between the 2.5th and the 5th percentile of the model trend distribution were colored yellow; and observed trend values greater than the 5th percentile of the model trend distribution were colored green.
Source: Michaels and Knappenberger, CATO Institute

What they've shown is interesting in a way, but their conclusions are invalid in my opinion. What they've plotted is:
  • The trends of the CMIP5 RCP4.5 multi-model mean for periods ranging from 64 years down to 10 years, all ending in 2014 (that is, from a 64-year trend over 1951 to 2014 down to a ten-year trend over 2005 to 2014). A rough sketch of the calculation, as I read it, follows this list.
  • The ranges containing 90 and 95 per cent of the individual model-run trends (roughly the 5th to 95th and 2.5th to 97.5th percentiles), which Pat Michaels describes as "the error bars are based upon the spread of the model results".
  • A plot with green, yellow and red circles, purportedly representing where the trend of the HadCRUT4 observations is consistent with the model trends.

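To make the method concrete, here's a minimal sketch, in Python, of the calculation as I read it from the poster caption. I haven't seen their data or code, so the array names and layout below are my own assumptions, not theirs - it's only meant to show the mechanics of recursive trends ending in 2014, plus the colour classification described in the caption.

    import numpy as np

    def linear_trend(years, temps):
        # least-squares slope, converted to degrees C per decade
        return np.polyfit(years, temps, 1)[0] * 10.0

    # Assumed inputs (my names, not theirs):
    #   years      = np.arange(1951, 2015)
    #   model_runs = array of shape (108, 64), annual global means per model run
    #   obs        = HadCRUT4 annual global means on the same years (2014 = Jan-Oct)
    def trend_table(years, model_runs, obs, end=2014):
        rows = []
        for start in range(1951, 2006):   # trend lengths from 64 years down to 10
            m = (years >= start) & (years <= end)
            run_trends = np.array([linear_trend(years[m], r[m]) for r in model_runs])
            obs_trend = linear_trend(years[m], obs[m])
            pctl = (run_trends < obs_trend).mean() * 100   # obs position in model spread
            colour = "red" if pctl <= 2.5 else ("yellow" if pctl <= 5 else "green")
            rows.append((start, run_trends.mean(), obs_trend, pctl, colour))
        return rows
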
In other words, they've done a Tisdale. (Bob Tisdale likes plotting derivatives.) When you read their conclusions you'll see that they've assumed that the trend of the models should always match the trend of observations, even over a period as short as ten years. What they don't seem to allow for is that the CMIP5 models are climate models, not weather models, so trends over shorter periods can be expected to differ from observations.

I say that because internal variability - multi-year patterns like the PDO and the IPO (and ENSO) - will show up at different times in different models, and won't be timed to coincide with what happens in the real world. Random chance dictates that the internal variability in some models will line up with observations some of the time, but that's all it is: chance. Much of the time, when a single model shows a short-term hike in temperature, the observations may show a short-term dip. Taking the mean of 108 model runs cancels out much of the internal variability in the models, whereas the observations still contain theirs. So, for example, variability arising from the different phases of the PDO will largely be averaged out across the models, while the observations are affected by whichever phase the PDO is actually in. You would expect the observed trend in a cool phase to be lower than the trend in a warm phase.
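
Here's a toy illustration of that point - entirely my own construction, nothing from the poster. Give each synthetic "run" the same forced warming plus a PDO/ENSO-like oscillation with a random phase: averaging 108 such runs largely cancels the oscillations, while a single realisation (standing in for the observations) keeps its full wiggle.

    import numpy as np

    rng = np.random.default_rng(42)
    years = np.arange(1951, 2015)
    forced = 0.015 * (years - years[0])        # shared forced warming, deg C

    def fake_run():
        phase = rng.uniform(0, 2 * np.pi)      # PDO/ENSO-like wiggle with random timing
        wiggle = 0.15 * np.sin(2 * np.pi * (years - years[0]) / 20.0 + phase)
        noise = rng.normal(0.0, 0.08, years.size)
        return forced + wiggle + noise

    runs = np.array([fake_run() for _ in range(108)])
    ensemble_mean = runs.mean(axis=0)
    observations = fake_run()                  # the real world is one realisation

    def detrended_std(series):
        fit = np.polyval(np.polyfit(years, series, 1), years)
        return (series - fit).std()

    # the ensemble mean's year-to-year variability is far smaller than any one run's
    print(detrended_std(ensemble_mean), detrended_std(observations))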

Now extend the trend period beyond, say, 40 years, and you can start to compare model and observed trends if you want to. That's because most multidecadal variability is smoothed out in the longer-term trendline of the observations, and it has already been mostly smoothed out of the models by the very act of averaging the 108 runs.

In short, when you plot trends for periods less than around 30 or 40 years, you can expect the model mean to be different from observations. That means that what Pat'n Chip found was what is expected.

Remember, Pat'n Chip plotted temperature trends for periods from 10 years to 64 years long. What they ended up with is the chart above. In the curve with the colours, the green circles show where the trend of the observations is consistent with the trend of the models - which holds for anything longer than about 40 years. Exactly what you'd expect.

When it comes to shorter trend periods - that is, trends over 10 years, 20 years, 30 years and so on - you'd expect the observations to differ from the multi-model mean. And that's what Pat'n Chip found.

So far so good. Where I believe Pat'n Chip went wrong is they jumped to the wrong conclusion. They put up these two charts:

Distribution of 20-yr temperature trends (left) and 30-yr temperature trends (right) from 108 climate model runs during the period ending in 2014 (blue bars). The observed 20-yr temperature trend (left) and 30-yr temperature trend (right) over the same intervals are indicated by the arrows colored to match the designations described in the previous figure.

What they concluded was that the models are "wrong". In my view, all they've shown is that twenty years is probably not long enough to establish the climate trend. You can see that in the thirty-year trend the observations are closer to the model mean. They don't show a similar chart for longer periods, but you can see what they found in the chart up top: as the trend extends over longer periods, the model mean trends move closer to the observations.
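
The two histograms boil down to a percentile question: where does the observed trend sit in the spread of the 108 model-run trends, and how does that change as the trend gets longer? A sketch, reusing the hypothetical linear_trend, years, model_runs and obs from the earlier snippet:

    from scipy.stats import percentileofscore

    def obs_trend_percentile(length, end=2014):
        m = (years >= end - length + 1) & (years <= end)
        run_trends = [linear_trend(years[m], r[m]) for r in model_runs]
        return percentileofscore(run_trends, linear_trend(years[m], obs[m]))

    for length in (20, 30, 40, 50, 60):
        print(f"{length}-year trend: {obs_trend_percentile(length):.1f}th percentile of model trends")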

To illustrate what I mean, I've plotted HadCRUT4 against a CMIP5 multi-model mean below. I've used RCP8.5 rather than their RCP4.5, but that doesn't make much difference, because the two scenarios track each other fairly closely until the middle of this century. Click the chart to enlarge it.

Sources: Hadley Centre and KNMI Climate Explorer (CMIP5)

You can see that the CMIP5 mean jumps up above HadCRUT4 from around 2005, which affects the slope. You can also see that the 20 year trend line is not as good a fit as the 40 year trend line. The R^2 for the 40 year trendline is 0.829; for the 20 year trendline it's only 0.37. (I know R^2 isn't the only, or the best, way to judge how well a trendline fits. The point is that if you want to estimate the long term trend, you're better off with more than twenty years of data.)
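
For anyone who wants to check those numbers, the R^2 comes straight from an ordinary least-squares fit over the chosen window. A quick sketch (the exact values will depend on which HadCRUT4 version and end month you use, so treat the figures above as indicative):

    import numpy as np

    def trend_r_squared(years, temps, start, end):
        m = (years >= start) & (years <= end)
        x, y = years[m], temps[m]
        slope, intercept = np.polyfit(x, y, 1)
        fitted = slope * x + intercept
        ss_res = ((y - fitted) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        return 1.0 - ss_res / ss_tot

    # e.g. with the HadCRUT4 annual means loaded as `obs` (my variable name)
    print(trend_r_squared(years, obs, 1975, 2014))   # ~40-year window
    print(trend_r_squared(years, obs, 1995, 2014))   # 20-year window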

If you enlarge the above chart, you can also see what I mean about the multi-model mean (lighter blue) smoothing out internal variability. The dips are volcanoes AFAIK (eg Krakatoa 1883, Agung 1963, Pinatubo 1991 etc), so they are all synchronised because the forcing is built into all the models. Other than that, though, the CMIP5 curve is much smoother than the observations. [Added by Sou a bit later.]

Now look at the CMIP5 trend. I've only put in the 40 year trendline, but it's not all that different from the forty year trendline for HadCRUT4. Yes, it's different - mainly because recent temperatures have been lower than those modeled. And that's because the CMIP5 runs only use observed forcings up until 2006. Since then, the actual forcings have turned out lower than the models assumed - solar radiation in particular (a positive forcing) has been weaker than projected, and volcanic aerosol forcings (which are negative) have probably been stronger. Not only that, but the PDO has been in a cool phase, which has dampened global surface temperatures.

There's just one more oddity. You'll notice the shortest trends - around 10 years - are marked in "green" as good. Yet when you look at the observations, the most recent observations are quite a bit below the models. Actuals are different from trends, I know. Still, surely that points again to the analysis being a bit wonky.

That's just my take for what it's worth. Feel free to disagree in the comments.


Addendum


For interest, here are the forty year trends to 2006, the last year for which the models used observed forcings; after that they have only estimated forcings. [Added a bit later - Sou]

Sources: Hadley Centre and KNMI Climate Explorer (CMIP5)



From the WUWT comments


I don't think too many people at WUWT could figure out what was done, and I didn't see anyone who came to the same conclusion as me.


RoHa parroted what he was told to parrot, like an obedient trained parrot (with apologies to parrots, who are very friendly birds and much smarter than your average WUWT-er, particularly when it comes to climate and weather):
December 19, 2014 at 2:43 pm
So the models don’t actually work? They just stand there looking pretty? Shame we can’t have any photos to make our own observations of them.

Lots of people got all huffy about not being able to take photos of posters. I didn't see anyone point out that the AGU and many of the scientists (but not Pat'n Chip) made their posters available for downloading. David Schofield whined, in ignorance:
December 19, 2014 at 8:54 am
If people put up a poster they are openly publicising it. Don’t understand why you can’t photo them.

Others jumped in and said it was standard at scientific and technical conferences to not allow photographs.

Was n.n talking about Pat'n Chip's prediction of an ECS of 2 degrees, or something else? You can decide.
December 19, 2014 at 9:14 am
The system is incompletely or insufficiently characterized and unwieldy, which ensures our perception will remain limited to correlation between cause and effect. It’s like the scientific consensus has lost touch with the scientific domain. They really should resist urges to dabble in predictions.

JeffC reckons science is pointless and it's all too hard. Thinking about it probably makes his head hurt.
December 19, 2014 at 11:33 am
why ? its silly and arrogant to think we can accurately model a chaotic system … its a waste of time and money …

Robert of Ottawa  wrote:
December 19, 2014 at 10:29 am
That graph is really stretching it – 2.5th and 97.5th between .55 and MINUS 0.1 C/decade.

To which average joe replied:
December 19, 2014 at 12:29 pm
Robert – that’s because at the far right the trend is taken over a period of only 10 years, the shorter the time period of the trendline, the noisier the signal. The wide error bars at the right threw me too until I gave it some thought, then it made sense.

Resourceguy reckons certain words are not permitted in science:
December 19, 2014 at 10:37 am
From the abstract, the word “unfortunately” is inappropriate for professional statistical presentations. It is either acceptance or rejection of the null and there is no unfortunate this or that to it. Period

michaelspj (Pat Michaels) explains the multi-model mean and the error bars:
December 19, 2014 at 5:40 pm
Its the average of each of the IPCC’s 108 2014 ensemble models. And the error bars are based upon the spread of the model results. If you download our entire poster you will see that they are normally distributed and therefore we can do very straightforward tests on the mean output versus “reality

Jake J can't follow it, and I'm not surprised. It's not an easy chart for a layperson to decipher. I'm not certain I got it right either.
December 19, 2014 at 11:32 am
Terrible chart. Too many lines, impossible for a non-specialist to interpret or use.

rgbatduke wrote a very long comment as usual, of which I'll just post the first two paragraphs:
December 19, 2014 at 11:58 am
Interesting, flawed, and curious. Interesting because it quantifies to some extent the observation that the climate models “collectively” fail a hypothesis test. Flawed because it once again in some sense assumes that the mean and standard deviation of an “ensemble” of non-independent climate models have some statistical meaning, and they do not. Even as a meta-analysis, it is insufficient to reject “the models of CMIP5″, only the use of the mean and variance of the models of CMIP5 as a possibly useful predictor of the climate. But we didn’t need a test for that, not really. The use of this mean as a predictor is literally indefensible in the theory of statistics without making assumptions too egregious for anyone sane to swallow.
What we (sadly) do not see here is the 105 CMIP5 model results individually compared to the data. This would reveal that the “envelope” being constructed above is a collective joke. It’s not as if 5 models of the 105 are very close to the actual data at the lower 5% boundary — it is that all of the models spend 5 percent of their time that low, but in different places. Almost none of the models would pass even the most elementary of hypothesis tests compared to the data as they have the wrong mean, the wrong variance, the wrong autocorrelation compared to the actual climate. Presenting them collectively provides one with the illusion that the real climate is inside some sort of performance envelope, but that is just nonsense.

Tom quibbles with some of the conclusions of Pat'n Chip, which really and truly are very farfetched, not just for the reasons that Tom gives.
December 19, 2014 at 3:09 pm
Quoting from the poster:
From the recent literature, the central estimate of the equilibrium climate sensitivity is ~2°C, while the climate model average is ~3.2°C, or an equilibrium climate sensitivity that is some 40% lower than the model average.
….., it means that the projections of future climate change given by both the IPCC and NCA are, by default, some 40% too large (too rapid) and the associated (and described) impacts are gross overestimates.”
No.
I buy that 2 is about 40% lower than 3.2 degrees, but it does not work in reverse. 3.2 is 60% higher than the appropriate estimate of ECS, not 40%.

Eli Rabett from Rabett Run sneaks in under the mods' radar and makes a comment, but not about the silly poster from Pat'n Chip. It's about the reason for disallowing photographs.
December 19, 2014 at 4:08 pm
AGU has no copyright on any poster. What they are getting at is that posters and presentations occupy a peculiar netherworld in scientific publishing, being often preliminary in nature and unpublished in the dead electron or tree world. Sometimes, the good times, people use posters to provoke discussion, scientific that is. Thus, when someone photographs a poster and puts it up for everyone to see problems can arise.
An amusing, well not at the time, to the authors example of this was an encounter Eli had with Bruce Malamud and Don Turcotte at AGU. Turns out that they had given a seminar at York University, and somebunny put up the powerpoints, which was picked up by Tim Curtin, then Eli. Malamud and Turcotte had, in their own words, no idea that it was out there, and indeed it took then another couple of years to complete the study. Take a look at the link, and the links at the link to Marohasy and Curtin and the comments at both places.

That's all I will bother with for now. You can check the archive for other comments by Pat Michaels, commenting as michaelspj.

46 comments:

  1. I must admit that I'm slightly confused by what they've done. I had thought there were at least some models that would match the observed trends, but they seem to show that the observed trend always runs, at best, along the 95% contour. Grant McDermott (Stickman's Corral) has a post that compares model and observed trends, but I've promised my wife that I'll put my laptop down :-) . If I get a chance, I'll try and find it and post it later.

    1. It would be interesting to do the analysis with different model runs but not very informative. At least I don't think so, for the reasons I've stated.

      With the multi-model mean you average out all the variability in individual model runs, so it's not very meaningful when compared with observations that have inherent variability (built in).

      If you were to compare observations to individual model runs as trends, then the timing is bound to be out most of the time. When the observations go up from an ENSO, for example, a single model may go down - or up - or sideways. The timing of internal variability can't be expected to coincide with models. Not unless the model has been initialised to a recent year and even then it will soon run away and do its own thing, as far as random internal variation goes. (That's one thing that Bob Tisdale can't accept either. He thinks climate models are weather models).

      I'll be interested to read Grant's analysis just the same.

    2. BTW I'm talking short term trends in the above. Once you get to longer term trends then there should be a fairly high correlation between models and observations.

    3. I think it's this post, but the figure seems to no longer be there (well, it is, but the link doesn't work).

    4. There was a Nature paper, I think about a year back, that made a similar comparison. They found that the observations are close to the 5% likelihood threshold compared to the model spread. The above post does not allow one to understand why Pat & Chip get a result that is just outside.

      It also does not matter that much; the multi-model ensemble spread is expected to be smaller than the true uncertainty. Many models, and often all models, use similar methods to model the various processes in the atmosphere. There are, for example, only 2 or 3 much-used methods to compute the influence of showers (convection). Many of these ensemble members (model runs) use the same atmospheric or oceanic models, just in other combinations (if at all).

      How much larger the uncertainty is than the model ensemble spread is unfortunately somewhat arbitrary. Given the above mentioned reasons, it is not possible to compute this objectively. You would need to know how much different the results would be if you had a wider range of process models than we have. But by definition we do not have that.

    5. I'm not sure whether people are still checking this thread, but I wrote an updated post on this models-vs-observations theme in honour of the M&K poster: Climate capers at Cato.

      Bottom line: M&K's arguments unravel once we properly account for confidence intervals (as hypothesis testing demands), or vary the starting dates for the recursive estimation.

  2. I've just added another chart for interest. I added the 40 year trend lines to 2006, where CMIP5 had observed forcings - after 2006 forcings like CO2, solar etc were estimated in the CMIP5 models.

  3. I'd take the existence of this poster at AGU as good news. It opens a dialog between a "luke-warmist" (Michaels) and more mainstream climate scientists. What I'd watch out for is this AGU poster morphing into "the Michaels and Knappenberger paper" (implying peer-review).

    1. Patrick Michaels is a signatory to the Evangelical Cornwall Alliance. Isn't he one of the ABC (Anything But CO2) bunch because "God" wouldn't let humans stuff up the world? Noah's Ark and rainbows and all that?

    2. No. I don't think that's correct, Ceti. I've never found Patrick Michaels on any Cornwall Alliance list. Do you have a link?

      Roy Spencer and Ross McKitrick are on the advisory board or used to be. With the new website they only list the directors, not all the other people. Here's a link to the old website's list of advisory board members:

      https://archive.today/SpiY4

      And here's a link to the signatories of the Cornwall Alliance's evangelical declaration of climate science denial:

      https://archive.today/wUDu7

      The same thing from their old website:

      https://archive.today/h0D0

      Patrick Michaels has defended Roy Spencer's right to have ideas about "intelligent design". I don't think he's ever defended the Cornwall Alliance's Evangelical Declaration of Climate Science Denial though.

      https://archive.today/8AVHI

  4. Looks to me like they have shown that you shouldn't use ludicrously short time periods - as deniers are accustomed to do - to determine trends. I wonder why nobody at WUWT picked that up.

    1. "ludicrously short time periods" I often point out that sea level has risen ~130 meters over the last 15 or 20 thousand years. 1/2 of N. America was under miles of ice back then. That's a lot of warming.

      15 or 20 thousand years is also a ludicrously short time period on a geological scale.

    2. Are you sure about that? I realise you're talking 15000+ years, but if you look at Jerry Mitrovica's video (link below) you'll see that fish tanks along the coast outside Rome are still usable. We don't use that method for keeping stocks of fish nowadays, but we could just as easily do it now with those same structures as the Romans did 2000+ years ago.

      If this 130 metre rise you're talking about happened mostly or entirely before the extensive and expanding agriculture of the last couple of millennia, then it's not really relevant, is it? Regardless, I'd be glad to look at any references you might have. Since seeing this video a while ago, I've been taking a lot more interest in these sorts of studies.

      https://www.youtube.com/watch?v=RhdY-ZezK7w

    3. Most of that did happen between about 18 and 6 thousand years ago.
      This might help.
      http://en.wikipedia.org/wiki/Current_sea_level_rise

  5. A priori I don't think your criticism about averages holds: they plotted all the individual runs and show the 5% contour, I think. Which seems pretty valid to me -- in some ways that can be better than averaging and assuming a Gaussian error distribution in fact, given that it's resistant to outliers.

    The invalidity probably (in the colloquial sense) comes from the same issue tamino often harps on: if you look at a lot of trends, they aren't independent of each other.

    1. Thanks, numerobis. That's exactly the sort of comment I was hoping for. I'll think on it some. It looks as if I misunderstood what they were doing with the model runs - at least the envelopes, but after your comment I've read it again and I see what you mean (I think).

      What about the time period of the trend? Looking at the data it does seem as if it points to needing more than 30 year trends for comparison. At least 25 year trends if you just take the expansion of the envelope. Though in this case it took 40 year trends.

      What might be interesting would be to compare individual model runs against the rest of the model runs and see if any of them fare any better against the models, and against the observations. (In a sense, the observations could be considered as just another model, albeit a near perfect model - as near perfect as the observations themselves.)

      Another question. What value is a trend comparison compared to a comparison of plain observations? It seems to me that to expect a single "run" to align over a short period (less than 20 or 30 years) is expecting a lot. Expecting the observations themselves (not the trend) to fit within the model envelope is reasonable for shorter periods - like 15 years or so. (Very unmathematical of me, I know.)

    2. I just read Victor Venema's comment up above, which explains some more. I think the boundaries are the extremities of model runs (which is different model runs at different points) - is that how others read it?

      Therefore the uncertainties for each model are not factored in - if that makes a difference. What I mean is that the boundaries are not probabilities in a real sense, they are just the extremes of the model runs for each trend period.

      And as Anonymous said below, there is also the uncertainty in observations.

    3. I know enough to be dangerous about estimating order statistics ("you're in the tenth percentile" type questions). I know even less about time series.

  6. The key failure is that they haven't plotted the uncertainty in the *observed* trends. If they had, it would be perfectly clear that the range for the obs overlaps the range for the models. I.e. there is no problem with the models. But that would contradict their predetermined conclusions...

    1. Isn't it somewhat ironic. These people normally complain day in day out about the quality of the observations. How enormously uncertain they are, that we cannot trust them one bit.

      When the trend in the observations is lower than the one of the models, however, they claim: the models are running hot. Does anyone know of a mitigation sceptic that has called for caution and stated that it is also possible that the observations are not right? I have never seen anyone.

      Makes one wonder whether their claim to be interested in better science is right or whether they simply are political activists.

    2. They trust observations which fit their narrative, reject everything else; classic confirmation bias. I can't recall who wrote it but the new meme is "models are approaching a record level of divergence from observations". Might have been McI over at ClimateAudit.

    3. This comment has been removed by a blog administrator.

    4. Except that Spencer's comparison is at best flawed, but better described as fraudulent. It's not comparing like with like; it uses satellite "observations" that have a clear low bias; and it is restricted to tropical areas, conveniently ignoring the drastic warming at the poles. The epic fail is your duplicitous presentation of nonsense as fact.

  7. Dang, I would have grilled them at their poster if I'd known it existed. It did not occur to me to search for their names in the AGU program, because it did not occur to me that they could have gotten anything past peer review even for a poster, given the astonishingly low quality of everything they've ever done. This particular success in passing peer review does not improve my opinion of their work; it decreases my opinion of the AGU review process.

    The "error" bounds are NOT *measurement* error bounds, even of the models.

    We expect observations to NOT match the model means. That's a much stronger statement than "we do not expect the observations to match the model means." NONE of the models produce a trend anywhere similar to the multimodel mean, because all the models are too good. Models do a good job of projecting the durations and temperature extents of the (pretty wild) variations in temperature, but do a really poor job of projecting the timings of those. Given historical forcing data, the models' multimodel mean is pretty close to observations nonetheless, though even then the RANGE of models is the proper thing to compare to the observations. But when models are given speculative forcing data, as Sou pointed out, the models' RANGE really must be what is compared to the observations.

    1. Almost all presentations are accepted at AGU. It is *not* meant to be limited to "reviewed science"; after all, the only people who see the presentations beforehand are the organizing committee, and they only see the Abstract. For the author(s), the best outcome from a poster is feedback that stops the author(s) from wasting months chasing red herrings or reinventing the wheel. There were almost certainly many posters (and maybe talks) at AGU that were less scientifically valuable than this poster, since they dealt with what is already known (even published!) and so don't really raise any new questions, which is what we scientists live for.

      What happens to this study after Pat and Chip absorb the feedback they got at AGU is what to watch for.

    2. They will claim that they have peer reviewed science proving that the models are shite. That is the point of exercises such as this one, not to get feedback on work in progress.

  8. When your data turns out dull or disappointing, differentiate until something emerges, then point at it. Sometimes it's done naively but often - as in this case - it's flim-flam.

    This may be the start of a strategic withdrawal from actual temperatures "not warming since" back to "it's all models and the models are wrong". It's a natural cycle usually signalled by the appearance of Pat Michaels in a very warm year.

  9. Pat n Chip bob up with one of these analyses whenever there is a dip in the temperatures. Last time, they wanted to publish, but got caught out by the 2010 El Nino. I think there is a cycle there.

    1. Thanks, Nick. What I picked up from your charts is that trend endpoints matter. Timing is everything. The window of opportunity is short.

      I expect the boundaries there were calculated in the same manner as here.

  10. I would have thought to get uncertainty intervals for the model runs you need a lot more than 108 - probably in the thousands, all run from randomly varying starting parameters with defined distributions. I'm guessing that the 108 model runs they used are not for probabilistic sensitivity analysis, but 108 separate scenarios. So they have shown something slightly different to the uncertainty in the models.

    Also this obsession the denialists have with the slopes of short runs is embarrassing. They're presenting slopes so short that the uncertainty is taking up their entire chart, there should be a hint there as to how stupid their work is.

    The little green uptick at the end should also give them a clue as to where this line of argument is taking them. Once 1998 and 1999 drop out of the range of near-term values, they're going to have this weird situation where there has been "no significant warming" for 18 years, but there has been significant warming for 15 or 16. Then they're going to have to completely drop the whole argument, and throw three years of denialist stupidity down the memory hole ...

    1. "Then they're going to have to completely drop the whole argument, and throw three years of denialist stupidity down the memory hole"

      I'll believe that when I see it...

  11. I assume people know Michaels got funded for a long time by the Western Fuels Association, i.e., Powder River coal, starting no later than 1991, and perhaps earlier. He was also involved with TASSC, a Philip Morris project.

  12. That looks a pretty silly poster to me.

    We know the models are running "warm" in recent years. You can see this simply by comparing the discrepancy between the model predictions and temperature observations directly. (Although, note that the obs still fall within the 95% ensemble interval if we use the Cowtan & Way series.)

    What M&K are doing, however, is creating a false impression that this discrepancy extends much further back in time than it actually does. This is an inevitable consequence of anchoring their trendline relative to a year (i.e. 2014) where the observations do fall below the model estimates. This fixed point distorts the relative slope(s) all along the remainder of the recursive series. In short, we appear to have a plain ol' case of cherry-picking.

    1. Grant, yes, that makes sense and probably explains why it appears the way it does.

    2. I saw the poster and wanted to ask Pat about this (and various other aspects of the methodology), but he was not at the poster when I was there.

  13. Isn't there also a discrepancy to be expected because the observations do not cover all of the polar regions where the greatest warming is predicted?

  14. From the caption: "... 108 individual CMIP5 climate model runs forced with historical (+ RCP45 since 2006) forcings "

    So for the last 9 years, the forcings were not historical. Then they missed out on the deep and extended solar minimum, as well as the predominant La Nina conditions, over these 9 years.

    1. ENSO (El Nino / La Nina) is not applied as a model forcing. From the point of view of a coupled climate model ENSO is a result, not an input.

      The recent tendency for La Nina will of course be included in the observations to which the models are compared.

  15. Using GISS instead of HadCRUT, a marginally different baseline in 2014, I'll look at the 50 year trend, just because dividing by 5 is easy:

    http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=3&type=anoms&mean_gen=1212&year1=1964&year2=1964&base1=2014&base2=2014&radius=1200&pol=rob

    That gives me an anomaly of 0.83C, or 0.16C per decade, which agrees well with the models. Why am I getting such a radically different result to PatnChip?

    1. From the chart, that's not too different to the HadCRUT4 result reported by Pat'n Chip above, Millicent. See also the 40 year trends I did above with HadCRUT4; they are not that different either. I also did some plots a little while ago showing charts from 1979 with trends.

      http://blog.hotwhopper.com/2014/10/a-reality-check-of-temperature-for.html

      I think Nick Stokes and others nailed it. Pat'n Chip are taking advantage of the lower current temperatures while they can. If there were trend lines plotted for different periods (like the one I did above to 2006) the model trends would look very similar to the observations. Not only that but the model mean is very close as well, right up to around 2005.

      Deniers/disinformers will shift to some other argument once the observed forcings are plugged into models and/or when the temperatures have another hike. Probably blame it on magical leaping ENSOs or try to argue down sensitivity, like in this poster.

      I'd be surprised if there was a paper coming out of this. It doesn't seem to have enough in it for a paper. If there is to be a paper, they'll need to get a move on before it's too late.

  16. What bunnies need to appreciate about Pat'n'Chip and Steve McIntyre is that they are very good at making it appear that they are doing something but really doing something else, which if you work very hard at it you can sometimes figure out. The something else, of course, tells the story they want. It is effectively misleading, but when you figure out what they really did they protest that anybunny should have read the fine print.

    Nick Stokes is very good at digging this stuff out, and then clowns like Carrick come down on him because, it is obvious that the shrink wrap was to be followed.

    1. Eli, what McI, et al. produce is arcane to me because I just don't have the stats chops to follow along. As such, I've never caught them squawking that somebunnies should have read the fine print. If you could produce a handy example I'd be interested to read it.

      When Stokes gets flak, I usually judge that he has caught one of them out and that they know it. Annoying that I don't always comprehend what Nick has nailed them for, but the strident non-rebuttals provide a good consolation prize in terms of my personal amusement.

  17. Yes the graph does appear to be a cherry-pick, doesn't it? They are taking advantage of the climate models not modelling the hiatus very well. They need to do their experiment with some other decades for comparison.

    I was also puzzled by the claim that recent studies giving lower climate sensitivities have "dominated". I would like to see them back up this claim with some references.

    "Recent climate change literature has been dominated by studies which show that the equilibrium climate sensitivity is better constrained than the latest estimates from the Intergovernmental Panel on Climate Change (IPCC) and the U.S. National Climate Assessment (NCA) and that the best estimate of the climate sensitivity is considerably lower than the climate model ensemble average."

    1. Harry, what they probably meant was that the climate literature that has been filtered by denier websites is "dominated" - not the literature that appears in scientific journals, where it doesn't "dominate" - not by a long shot.

  18. There are two reasons why Pat 'n Chip's graph looks convincing, but isn't.

    The first reason is that they plot against the HadCRUT4 data, which is known to have a cooling bias because it has missing stations around the poles (and elsewhere), but particularly around the north pole, which is the region which has warmed the most over the last 15 years. It would be good to see a plot comparing models to the Cowtan & Way datasets (one using spatial infilling from UAH satellite data and one using kriging). Most likely the plot against GISTEMP shows more of a match with CMIP5 because GISTEMP uses kriging to get complete coverage so will include the recent Arctic warming.

    The second reason the CMIP5 / HadCRUT4 trends plotted are different is because of the lack of controlling for external factors, of which ENSO (El Nino, La Nina, neutral) status, TSI (total solar irradiance) and volcanic aerosols are the main ones. The most important is ENSO, because we have been in a neutral state for a while, which reduces observed trends. If the plot was of straight temperature instead of temperature trend then you would get a reasonable impression by adding 0.1 to 0.2 degrees C of ENSO effect to global temperatures most recently to match the 1998, 2005 and 2010 effects, but the plot of trends makes it more difficult to just do this - you have to work out the weighting to give it.

    In fact the most important thing of all would be why the long-term (60 year) trends are different between CMIP5 and HadCRUT4. Possibly this is just because of the recent Arctic warming unaccounted for by HadCRUT4 and plotting against trends calculated using Cowtan and Way data (matched to HadCRUT4 averages over the period 1980 to 1990 say), would be very interesting.

    ClimatePete

  19. Lovely cherry-picking on their part - if you were to plot ever increasingly short trends ending in 1998 you would get nearly opposite results. Not quite as dramatic, as CMIP5 models include forcings up through 1998, and hence don't have the divergence (which they don't acknowledge) between projected and actual forcings as seen in the P&C graph, but certainly there.

    Michaels and Knappenberger continue to produce deceptive graphs - they're lobbyists, that's what they do.

