
Tuesday, February 11, 2014

Roy Spencer's latest deceit and deception

Sou | 7:39 AM | 39 Comments
Update: Today Roy Spencer responded below. I've now written another article explaining his deception a slightly different way.

Sou 21 May 2014



Sheesh! How's this for unadulterated chart fudging. Roy Spencer has put up a chart and proclaimed (archived here):
...the climate models that governments base policy decisions on have failed miserably.
I’ve updated our comparison of 90 climate models versus observations for global average surface temperatures through 2013, and we still see that >95% of the models have over-forecast the warming trend since 1979, whether we use their own surface temperature dataset (HadCRUT4), or our satellite dataset of lower tropospheric temperatures (UAH):

Let's look at how he's conned his denier fans.  Below I've plotted the CMIP5 composite mean against UAH and GISTemp using a 1981-2010 baseline, which is what UAH normally uses, and then I'll discuss what Roy Spencer has effectively done:

Data Sources: NASA, UAH and KNMI Climate Explorer

What he's effectively done is shifted the CMIP5 charts up by around 0.3 degrees.  In case you find it hard to credit that even a contrarian scientist would stoop so low, here is Roy Spencer's chart, with my annotations:

Adapted from Source: Roy Spencer
Not only did Roy effectively shift the CMIP5 data up, he also effectively shifted the UAH data down in comparison with HadCRUT4.  This is the chart of UAH and HadCRUT4 using the 1981-2010 thirty-year baseline - compare it to Roy Spencer's deceptive fudge:

Data sources: UAH and Met Office Hadley Centre

How did he fudge?  What Roy Spencer has done is use a five-year average - 1979-1983 - as his baseline, instead of the normal thirty-year baseline.  Why did he pick 1979 to 1983 as the baseline?  The answer can only be that he wanted to deceive his readers.  Here is a comparison of UAH and HadCRUT4 using his shonky five-year baseline against his normal thirty-year 1981-2010 baseline.
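To see the effect in numbers rather than pictures, here's a minimal sketch in plain Python. The values are made up for illustration - they aren't the real UAH or HadCRUT4 data - but the mechanism is the same: re-baselining a series against a short window in which it runs abnormally warm shifts the whole series down relative to one that doesn't.

```python
# Hypothetical annual anomalies for two series sharing the same warming
# trend, except one (uah-like) runs 0.15 degrees warm in 1979-1981.
years = list(range(1979, 2014))
trend = [0.012 * i for i in range(len(years))]
surface = trend[:]                                      # surface-like series
uah = [t + (0.15 if y <= 1981 else 0.0) for t, y in zip(trend, years)]

def rebaseline(values, start, end):
    """Re-express anomalies relative to the mean over the years start..end."""
    base = [v for v, y in zip(values, years) if start <= y <= end]
    mean = sum(base) / len(base)
    return [v - mean for v in values]

# Conventional thirty-year baseline: the two series stay closely aligned.
s30, u30 = rebaseline(surface, 1981, 2010), rebaseline(uah, 1981, 2010)
gap30 = u30[-1] - s30[-1]          # gap between the two series in 2013

# Short 1979-1983 baseline: the warm early years inflate the uah-like
# series' baseline mean, shoving its whole curve down relative to the other.
s5, u5 = rebaseline(surface, 1979, 1983), rebaseline(uah, 1979, 1983)
gap5 = u5[-1] - s5[-1]             # same gap, now artificially widened
```

With these made-up numbers, `gap30` is a few thousandths of a degree while `gap5` is about -0.09 degrees: the same data and the same trend, but the short baseline manufactures an apparent offset between the two series.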



That's not all that he's done.  If you compare the five year baseline chart I plotted with Roy's chart - his chart shows UAH lower than HadCRUT4 in every year.  That's not what my chart above shows, even using his shonky 5-year baseline.  Roy said he's using "running five year means" - which only shows the elaborate lengths he felt he had to go to in order to deceive people.

Anyway, to further illustrate Roy's shonkiness, here is the longer term CMIP5 and CMIP3 means vs GISTemp using the normal 30 year baseline:

Data Sources: NASA and KNMI Climate Explorer

The divergence only becomes apparent from around 2005.  Going by Roy's past behaviour, I shouldn't be surprised at him fudging the data to this extent, but I am.

From the comments


Mostly fake sceptics who are all too keen to buy into Roy Spencer's deception (archived here).

david dohbro says:
February 7, 2014 at 11:03 AM
unfortunately most of our decisions are emotionally based; very few factual. These decisions range from the simplest thing of “what to put on my sandwich today” to those on a much grander scale “let’s declare war to a nation”…


benpal says:
February 7, 2014 at 11:34 AM
Thanks for this update on the State of the Planet.

Jan says:
February 7, 2014 at 11:59 AM
Regardsless of who is right or wrong we must all be glad that the worst predictions seems to have failed.
I sometimes wonder if the alarmist share this relief, somehow I have the feeling that many of them want the temperatures to increase just to prove themselves right.

Denier Don Easterbrook says:
February 7, 2014 at 12:10 PM
Roy
In 2000, I downloaded the IPCC temp prediction to 2100 from the official IPCC website showing a 1 F warming from 2000 to 2010. That curve has long since disappeared from the IPCC website (surprise, surprise!) and the deviations of their projections from measured temps from 2000 are much, much smaller. My question is–how much of the deviation of the modeled curves from 2000 has been back-casted, i.e., their original predictions changed to match what actually happened. If that is the case, then their prediction record is actually considerably more miserable than your curves show.
Don

David A. says:
February 7, 2014 at 3:04 PM
Don, what document was the IPCC data from? Because all 5 ARs are available here:
http://www.ipcc.ch/publications_and_data/publications_and_data_reports.shtml#.UvVKDPldWa8
I wrote to you twice about two weeks ago, asking for the data source for one of your charts. You never replied. What happened to data sharing?

Pablo says:
February 7, 2014 at 1:09 PM
So 97.8% of climate models are wrong, somehow thats quite poetic. :)

Salvatore Del Prete says:
February 7, 2014 at 1:38 PM
Exactly, and as each month goes by they are more and more off.

Salvatore Del Prete is probably referring to this "not even wrong" prediction when he says:
February 7, 2014 at 1:39 PM
Don, if you read this I have been and continue to be in complete agreement with your climate assessment.

39 comments:

  1. Moving goalposts, shifting baselines...

  2. What's striking is the bare-faced audacity of it. Spencer knows he's lying, he knows lots of people can see he's lying, but he knows his audience will love it. He also believes his god is in every audience so there must be some mental gymnastics going on there.

    It's also so old-fashioned. It even has UAH, which is sooooo not the dataset to use these days. Perhaps he's trying to promote it.

  3. I'd love a step-by-step on aligning those baselines. I'm also curious how the hindcast/forecast periods - which I thought were different for CMIP3 and CMIP5 - come into play. And I wonder if RealClimate and/or SkS will pick this up.

    1. CMIP3 is not that different to the next generation CMIP5 on a broad basis, from what I can see. I think the differences come in the resolution at the regional level. I also gather that the next generation again will be on faster computers with leaner programming, so should be even better.

      But I've not got any special insight - just going on what I've picked up around the traps.

      Oh, and within CMIP5, the different pathways (RCPs) don't diverge until around the middle of this century.

    2. BTW aligning the baselines is just a matter of working out the average for the period of the baseline and subtracting that from the anomaly (if it's a different baseline). If it's the same baseline eg UAH reports with the 1981-2010 baseline, then you don't need to do any adjustment.

      I've provided the links to the data above. It's a very simple spreadsheet exercise.

    3. Sou - I guess another way of putting it is that I'd expect CMIP3 and CMIP5 to have, as part of their product, the baseline alignment to GMST or some other variable. There shouldn't be any room for you or Spencer to pick an alignment, because that would violate the basics of the CMIP products.

      I think one could take an opposite view as well... take the CMIP products and use a fitting routine to optimize the correspondence between some GMST product and the model by solving for the best baseline. Either way, I think I'd want to talk directly to Gavin or the like.

    4. This comment has been removed by the author.

    5. Sorry Dave123, GCMs just don't work that way. It's a common misconception that GCMs start with an observed data record and extrapolate into the future but that just isn't so. Climate projections don't make any direct use of current or historical climate observations (though "historical" runs use observations of volcanic eruptions, GHG concentrations, and a few other things for boundary conditions).

      The CMIP5 suite does include some experimental decadal-scale predictions that are initialized from real data. But these are distinct from climate-change experiments or historical simulations.

      On the broader point the main difference between CMIP3 and CMIP5 is the move toward earth system models that include things like biogeochemical cycling. The difference in resolution isn't all that great.

      [Replaced original post with edits for typos and clarity.]

    6. Thanks for the added info, Don. I mistakenly thought that CMIP5 might have higher resolution because the latest IPCC report goes into more detail with regional projections. I can see now that I got that wrong.

      Here's an article from realclimate from last April about regional projections. There are more articles. I gather that regional projections still have a very high amount of associated uncertainty.

  4. I posted up about the deception and linked to here, but my post didn't get through the WUWT Hypocrisy Filter.

  5. Sou, I couldn't resist grabbing and sharing the heart of this post.
    Thank you so much for all the work you do and the nonsense you expose.

    Cheers, CC

    http://whatsupwiththatwatts.blogspot.com/2014/02/roy-spencers-argument-depends-on-lie.html

  6. For me, the most saddening part of Roy's blog post was this bit:

    "I am growing weary of the variety of emotional, misleading, and policy-useless statements like “most warming since the 1950s is human caused” or “97% of climate scientists agree humans are contributing to warming”,"

    I would ask Roy whether he thinks misleading, policy-useless rhetoric such as "95% of climate models agree: the observations must be wrong" is any better, especially as it is based on a (shall we say) highly nuanced presentation of the data.

    AFAICS nobody is claiming that the observations are wrong (other than with respect to well known sources of bias such as lack of Arctic coverage), so Roy's comment is basically a blatant straw man, designed as a rather uncharitable caricature of the modellers. Not the sort of behaviour one would expect from a scientist.

  7. Sou, I don't really understand your post. You wrote:

    Why did he pick 1979 to 1983 as the baseline? The answer can only be that he wanted to deceive his readers.

    The UAH data begins in 1979 -- 1979 is the earliest for which they have data to compare to models.

    I agree that it is misleading to compare LT temperatures to surface temperatures. On the other hand, aren't LT temperatures supposed to be increasing faster than surface temperatures (the famous factor of 1.2)?

    But without knowing exactly where Christy & Spencer got their model data -- Roy doesn't provide a link -- how do we know when the model data started? Was it also 1979?

    1. David, re your first point - UAH normally plots the anomaly against a *thirty* year baseline, from 1981 to 2010. As you can see from the charts I plotted, UAH and surface temperatures are closely aligned except for the first three years. From 1979 to 1981 inclusive, UAH has an abnormally high anomaly compared to the surface temperature.

      Therefore, by restricting his baseline to five years 1979-1983, instead of thirty years, Roy gives the false impression that UAH and the surface temperatures are out of whack. They aren't. He picked those five years because that's when UAH is abnormally higher than the surface temperature and the models. It's fudgery.

      Look at the charts where they are both plotted against a 30 year anomaly and you'll see they are closely aligned for the duration.

      Also, the problem isn't that of comparing lower troposphere temperature anomalies with surface temperature anomalies. Although they are measures of different parts of the system, they still track each other quite closely. You can see that illustrated in my article above as well.

      As for where they got their model data, IIRC it's from the same source as I did, as I linked above in the article. From KNMI Climate Explorer.

      The model data starts way back. I've plotted it against GISTemp in the article above going back to 1880. In fact, CMIP5 goes right back to 1861 and right up to 2100, so it's easy to work out the anomaly from the normal thirty year baseline - in this case, the 1981-2010 baseline. Roy didn't do that. He plotted everything from his abnormally short five year baseline, which was obviously deliberately chosen to make things look out of whack. When they aren't.

      I've provided links to all the data. Anyone can easily download it, work out the average for 1981-2010 and use that as the zero baseline, and then work out the average for the short period of 1979-83 and plot the data again. You'll soon see what I mean. You'll get the same result as I showed above, but it might help you understand what I'm trying to explain.

      Oh, and on top of all that, Roy did something with five year rolling averages. That would explain why he gets UAH even lower compared to the others than if he'd just plotted against his five year baseline.
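      The "running five year means" step can be sketched too. Here's a centred five-year running mean in plain Python (again, the anomaly values below are made up for illustration, not real UAH data):

```python
def running_mean(values, window=5):
    """Centred running mean; None where a full window isn't available."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i >= len(values) - half:
            out.append(None)              # not enough neighbours for a full window
        else:
            out.append(sum(values[i - half:i + half + 1]) / window)
    return out

anoms = [0.0, 0.3, 0.1, 0.2, 0.4, 0.1, 0.5]   # hypothetical annual anomalies
smooth = running_mean(anoms)
# smooth[2] is the mean of the first five values
```

      Smoothing on its own doesn't shift a series up or down, but layered on top of a cherry-picked short baseline it can make two series look further apart than the raw annual values do.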

  8. BTW aligning the baselines is just a matter of working out the average for the period of the baseline and subtracting that from the anomaly (if it's a different baseline).

    Sou, whilst I can see that Spencer has chosen a baseline at the peak of 1983 (rather than averaging as you have suggested) which is naughty, I couldn't help but notice that your "properly aligned" graph appears to start at below the average. Do you have some assumptions/calculations to explain why it appears that way?

    1. Greig, the "properly aligned" chart in the top animation starts below zero because I've plotted the anomaly from the 1981-2010 mean.

      The temperature is going up over the period. That means the zero'd baseline (the average temp for 1981-2010) is higher than the temperature early in the period and lower than the temperature later in the period.

      In other words, the data starts out with a negative anomaly, passes through zero somewhere between 1981 and 2010, then becomes a positive anomaly.

  9. Since Spencer normalized that data, something that you did not do. Your entire analysis is hand-waving bull. Once the data is normalized the choice of baseline doesn't matter.

    1. How do you know he normalised the data?

    2. Anon.

      I should have added, what exactly do you mean by your use of the term "normalise" in this specific case?

    3. Anonymous - huh? The article shows how Roy Spencer *abnormalised* the data.

      (Dunning Kruger strikes again.)

    4. I do wonder if Anon. knows what "normalise" actually means.

  10. Sou, I'm sorry but I am still confused.

    UAH calculates its anomalies with respect to a 30-year baseline.

    But once you have all the anomalies, why can't you compare one 5-year period to another, via the 5-year running mean?

    If the average anomaly in one 5-year interval is (say) 0.30 C, and the average anomaly in a different 5-year interval is 0.50 C, then the second 5-year period is warmer than the first, as long as both were calculated from the same baseline period.

    I'm not convinced Spencer did anything wrong. But I'm not convinced he didn't either.

    1. I suggest you do as I suggested, David. That is, plot the data yourself with a thirty-year baseline and then with the 1979-83 baseline. Then you should understand why using that particular five-year baseline gives a distorted view of the situation.

      Although if my explanation and charts aren't sufficient, maybe you won't. I can't think of any better way to illustrate what I mean, other than what I've already written.

  11. For example, how is Spencer's graph any different from the IPCC 5AR Figure TS.14(a), pg. 87, here:
    http://www.climatechange2013.org/images/report/WG1AR5_TS_FINAL.pdf

    except the IPCC used a different starting year? This figure also shows observations at the very low end of model projections.

    1. The IPCC graph is the same one Don C. Morton used here (his 2nd graph):
      http://archive.is/opPXp

    2. I'm a Johnny-come-lately to this thread. Please do answer David Appell's last comments. I was referred to your article as a solid rebuttal of Spencer's post.

      To the person on the street, the IPCC graph (WG1AR5_TS_FINAL p. 87) shows most model projections were wrong from 2000 - 2012. This makes Spencer's interpretation seem correct.
      Why should I continue believing in models that are wrong?

    3. I missed seeing David's last comment. Neither you nor David are used to looking at charts.

      If you're used to charts the differences between Spencer's shonky chart and the IPCC chart you refer to are as plain as the nose on your face.

      For one thing, Spencer shows a big difference between HadCRUT and UAH which isn't real, as my article demonstrates. You can see that for yourself if you download the data and plot it in a spreadsheet or charting package.

      For another thing, Spencer argues that ">95% of the models have over-forecast the warming trend since 1979", which is a load of crap. Even you, Matt, should be able to see that the observations in the IPCC chart you refer to are well aligned with the model runs right up to the early 2000s.

      As for why you should "continue believing" - that's not something for me to answer. As far as I'm concerned you can believe whatever you want. Pink elephants, angels, whatever.

      The facts are what the facts are. There is lots of science which, from your comment, you don't keep up with. For example, the models were run with higher-than-actual solar radiation, so they are running a bit hotter than observations. There've been recent papers written about this.

      I'd suggest wait a bit. It's only going to get hotter.

      (Your comment makes you sound like a hostile science denier. If you want your questions to be taken seriously, think about framing them differently.)

    4. Roy Spencer's graph and the IPCC are different graphs as far as I can tell - does that answer the question?

      Technically, the IPCC graph uses a baseline of 20 years compared to Roy Spencer's 5 years. Also (which is the point of this article) the IPCC does NOT set all the measurements to the same value at the starting point of 1986, which is what makes Spencer's graph look suspicious.

      The way Roy Spencer has presented his graph gives the appearance that the average of the models has been higher than the observations since 1983. But if you look at the IPCC graph you can see this is not the case, the observations have actually been WARMER than the average of the models from time to time since 1986.

  12. Thanks for the quick reply. I understand what you wrote about 1979 and Spencer's statement that >95% of models over-forecast the warming trend. What about post 2000? If Spencer modified his statement and gave a starting year of 2000 and referenced only the IPCC published chart on page 87, would you rate the >95% of models are wrong statement true?

    Honestly, it feels like you employed a bit of misdirection. Spencer's statement may have been false as he wrote it, but for the 34 years between 1979-2013, he was wrong about 21 years and right about the most current 13. Why do the most current 13 matter more to me? I don't have a scientific reason, but they do.

    Your writing style (at least in the comment directed at me) makes you seem a bit like an elitist zealot solidly entrenched in dogma. In reality, I don't think that is true. Perhaps the framing of my comment ticked you off enough to treat me with faint hostility.

    I am not a denier. I'm just someone searching for information from trustworthy sources in his spare time.

    As I see it, whether you publish this comment or not, AGW supporters face a big problem.

    The IPCC chart seems to prove climatologists projections have been wrong since 2000 and aren't getting much better. Deniers pick up on that stuff, present it and convince regular people that most climatologists don't understand the climate well enough to make accurate projections. This causes doubt of the science behind climate modeling which is parlayed into mistrust of AGW supporters who seek to influence public policy.

    Regular folks may not have your insight and causal knowledge behind climate models, but we can look at a graph and see when forecasts don't match reality. Anyone who has looked at a 5-year market forecast and compared it to his portfolio has this skill.

    1. What models have not done is predict the exact pattern of solar activity, aerosol emissions and ENSO events which did actually occur in the last thirty years. This is not failure, since that is not the function of climate models.

      Hansen et al's model was run on scrounged flops on what passed for a supercomputer back then and crunched out one run for each of three GHG scenarios. Since the mid-90's models have been running thousands of variations for each GHG scenario, corresponding to normal variation in ENSO, solar, etc. The result is an average, with an envelope of good old give-or-take each side.

      The actual outcome hasn't fallen outside the envelope, so the models have coped with reality so far. There has been a preponderance of La Nina conditions and unexpectedly low solar activity, and yet no actual cooling. Which is very suggestive.

    2. This article was about how Spencer fudged his chart. I have written other articles about models in general and the last few years in particular.

      I'm past caring about the impression I give people. I am most definitely *not* an AGW supporter. That label belongs to science deniers and people like Matt Ridley who want the world to heat up.

      What I want is for more people to wake up and take heed of the warnings.

      Matt, you still come across as someone who doesn't "believe" science, thinks it's alarmist nonsense or something, and is looking for reasons to reject what's happening. Maybe you've just had a peep at a chart and have decided you can forget about it because you reckon "it won't be that bad". Depending on your age and where you live it might not be that bad for you.

      Thing is, if we don't start acting responsibly at the global level, it will be affecting people not just for the rest of this century, but for hundreds and probably thousands of years into the future.

      You can turn your back on it and say it's not your affair or you don't "believe" it or whatever. That's your business. Nothing I say will persuade anyone one way or another. All I hope to do is show that disinformers disinform and the tricks they use.

      If you've read anything about climate you'll already know there is also a lot more happening besides the rise in temperature. We are making a long term investment in a hotter world.

    3. Matt writes.

      "To the person on the street, the IPCC graph (WG1AR5_TS_FINAL p. 87) shows most model projections were wrong from 2000 - 2012. This makes Spencer's interpretation seem correct.
      Why should I continue believing in models that are wrong?"

      and

      "The IPCC chart seems to prove climatologists projections have been wrong since 2000 and aren't getting much better"

      You seem to be following in the footsteps of other deniers with the 'models are wrong' meme, and you have been cherry-picking data to support your notion. You seem to be focussing on the meaningless and ignoring the meaningful. The models are NOT wrong, as you seem to believe. Yes, they have limitations, but those limitations need to be considered and understood, not used to throw the baby out with the bathwater.

      So, go to page 63 of the WG1AR5_ALL_FINAL and you will see a number of histograms of observed and simulated GMST trends in °C per decade, over the periods 1998–2012 (a), 1984–1998 (b), and 1951–2012 (c).

      Now what you will see is that on the first histogram, the models run much warmer than observed, the second histogram, the models are slightly warmer than observed, and the third, the models are spot on to the observed.

      This is a function of the models' inability to predict the chaos apparent within short-term datasets. This is entirely expected and normal, and should not be a detraction. But for you, it is. Why?

      The computer models cannot and will never be able to predict the short-term chaotic variations in the climate. It would be like a computer being able to predict a coin toss. It will never happen. But over time, these chaotic variations tend to cancel themselves out. Over 1,000 coin tosses there will be, on average, 500 heads and 500 tails, give or take a few. The same goes for the climate. So the figure on page 63 shows that over long periods the models are in fact very good at projecting the climate, yet you focus on the short periods as if they were actually meaningful. You need to wipe that from your brain, just accept it, and focus on the long-term trends, which is the aim of that page.

      Also, once you have finished reading that, I suggest that you read the whole document from the beginning and give your Morton's daemon the boot. Only then will you be able to understand the document in full.
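      The coin-toss point can be illustrated with a toy simulation (purely synthetic numbers, nothing to do with any real dataset): give many fake temperature series the same underlying trend plus random year-to-year noise, then compare how much 15-year trends scatter against how much full-record trends scatter.

```python
import random

random.seed(1)
TRUE_TREND = 0.02     # degrees per year, an arbitrary made-up value
N_YEARS = 62          # roughly the 1951-2012 span mentioned above

def ols_slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, ..., len(ys)-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

def spread(slopes):
    return max(slopes) - min(slopes)

short_slopes, long_slopes = [], []
for _ in range(200):
    # Same underlying trend every time; only the "weather" noise differs.
    series = [TRUE_TREND * t + random.gauss(0, 0.1) for t in range(N_YEARS)]
    short_slopes.append(ols_slope(series[-15:]))   # last 15 years only
    long_slopes.append(ols_slope(series))          # full record

# Short-window trends scatter far more run-to-run than full-record trends,
# which cluster tightly around the true value - same lesson as the coin tosses.
```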

  13. Matt Lashley

    What about post 2000?

    It seems increasingly likely, as others point out above, that some of the forcings used for the CMIP5 runs done for AR5 were wrong.

    If the models are forced with observed solar, observed ENSO and improved, updated estimates for volcanic aerosols rather than those used for AR5, they come into much better agreement with observations post-2000 (Schmidt et al. 2014). When the full effects of cooling from enhanced wind-driven ocean circulation are taken into account (England et al. 2014), the agreement will presumably get better still.

    Then of course there's the very real possibility that the instrumental record is itself biased cool because of coverage lacunae (Cowtan & Way 2014).

    Closer still and closer.

    I am not a denier. I'm just someone searching for information from trustworthy sources in his spare time.

    I hope this helps dispel the confusion sown by contrarians such as Spencer, Christy, McNider, Watts, McIntyre, Montford etc.

  14. we aligned all of the observations so that the *5 year average* at the beginning of the record (1979-1983) was the starting point. There is NO deception here, nothing nefarious, as you suggest. You can make your own graphs to suggest we did the same as you, but we didn't.

    1. You're a bit late to your party, Roy. As for deception, it's as plain as the nose on your face and your smile through gritted teeth.

      Perhaps you will explain why you chose a "five year average" at the beginning of the record and not a thirty-year average. Perhaps you will explain why, since you did pick a five-year average instead of a thirty-year average, you picked that particular five-year period when UAH was abnormally high, such that it distorted the difference (as I showed above). Why did you pick 1979-2004 rather than, say, 2001 to 2005? Why did you move away from your normal baseline of 1981 to 2010?

      Your deliberate deception works as a talking point with deniers and people who are mathematically challenged (as you can see in the comments). The rest of us are onto your game.

    2. The UAH record starts in 1979. So the earliest there'd be a 5-year average is the end of 1983.

    3. Belated correction: "Why did you pick 1979-2004 rather than..." should of course read "...1979-1983..."

    4. How is "aligning" the observations and model at 1983 justified? This would imply that all measurements were the same at 1983, which is so improbable as to be impossible.

      Also how is using a baseline of 5 years justified - 5 years is too short a time to establish a reasonable average with such a noisy signal.

