Let me illustrate. In all these charts except the one at the bottom, the CMIP5 projections are as in the IPCC report, using RCP8.5. That is, they use estimated, not actual, forcings from 2006 onwards. With the actual forcings the projections would have been lower from 2006.
Below is a chart showing GISTemp vs CMIP5 model mean.
See also Addendum below.
Then the same with a five-year running mean instead of annual averages. As you can see, by using the running mean you effectively lop off the latest observations.
That's why Bob Tisdale's chart has been drawn like that instead of using the annual data. Here is his actual chart:
If you look at the monthly charts, they show even more starkly how the observations have gone from running below the CMIP5 models to running ahead of them. First GISTemp, which includes January 2016:
Then HadCRUT4, which goes to December 2015:
And here's a chart showing the February to January average, with the CMIP5 multi-model mean shown in white:
The above chart also shows the January 2016 anomaly, plus the anomalies for the 12 months ending January 2016 and January 2015. Click to enlarge as always.
So when Bob Tisdale writes nonsense like this, he's telling a big fat lie:
Considering the uptick in surface temperatures in 2014 (see the posts here and here), government agencies that supply global surface temperature products have been touting record high combined global land and ocean surface temperatures. Alarmists happily ignore the fact that it is easy to have record high global temperatures in the midst of a hiatus or slowdown in global warming, and they have been using the recent record highs to draw attention away from the growing difference between observed global surface temperatures and the IPCC climate model-based projections of them.
He's lying to his conspiracy crowd. The difference isn't growing. It's shrinking, and on the monthly charts, the difference has already gone the other way. The observations are higher than the modeled projections!
He does it again, writing:
It’s very hard to overlook the fact that, over the past decade, climate models are simulating way too much warming and are diverging rapidly from reality.
What's very hard to overlook is that Bob is a science disinformer by profession. A con man. A pseudo-science crank. Climate models are not simulating "way too much warming". It's the planet itself that's doing way too much warming. More even than what was projected.
It gets even worse. Bob put up a chart where he plotted the difference between HadCRUT and CMIP5 and wrote:
In this example, we’re illustrating the model-data differences in the monthly surface temperature anomalies. Also included in red is the difference smoothed with a 61-month running mean filter.
The chart he put up is below. I've animated it, and added some annotations. I've highlighted the zero line. Above the line, observations are less than projections; below the line, observations are hotter than projections:
The greatest difference between models and reconstruction occurs now.
Bob tries to delude his readers by adding that big red line, hoping they won't notice that the model minus observations is now negative. In other words, with the latest observations added, the actual temperatures are hotter than what was projected.
Model projections are accurate when the actual forcings are included
As you know if you've been keeping up with climate science, the models were constructed back around 2005/06, so they didn't have the actual forcings of solar radiation, volcanic aerosols etc. When you plug the actual forcings into the models, the difference between models and observations disappears. Here's a chart I posted that Stefan Rahmstorf put together, showing more clearly how the observations are consistent with the CMIP5 projections. In this last year they've been higher than projected:
Bob's conspiracy theory widens to scientists the world over
Bob insists on repeating his lie about the latest data, too. He knows he's wrong because lots of people have told him so, but he insists there's a vast conspiracy. He's now had to rope the UK Met Office into his conspiracy of coordinated fraud, writing:
The impacts of the unjustifiable adjustments to the ERSST.v4 reconstruction are visible in the two shorter-term comparisons, Figures 7 and 8. That is, the short-term warming rates of the new NCEI and GISS reconstructions are noticeably higher during “the hiatus”, as are the trends of the newly revised HADCRUT product.
He didn't mention the other main surface dataset, Berkeley Earth. If he had, he'd no doubt have added the Berkeley Earth people to his growing list of conspirators. Here's a table showing the trends from 1970 to 2015:
Bob's conspiracy is based on his false assertion that global sea surface temperatures should be identical to night-time marine air temperatures. He's wrong. As well as the fact that night-time marine air temperatures are air temperatures, not sea surface temperatures, there is a lot more of the ocean covered by buoys than by ships taking air temperature readings. Bob has been told all that, but he continues to promote his conspiracy that scientists all over the world are fudging temperatures. He's a crank and a nutter, as well as a climate disinformer. He knows he'd never get anyone but science deniers to buy his "books", so I guess he figures he has to keep lying to keep the funds flowing. Or maybe his motivation is different. He might just be a pathological liar - who knows.
Addendum with further illustrations
Here are some charts which further illustrate how Bob is deceiving his readers. They variously show LOESS smoothing using (Bob's) 61 data points, the simple moving average, and CMIP5; the final chart also includes the monthly observations. The period is on the charts. Click to enlarge them.
Added by Sou 1:25 pm 16 Feb 2016 AEDT
What I find fascinating is that Tisdale writes that "The models can't explain the warming from about 1910 to the mid-1940s" below a graph which very clearly shows that they can do exactly that. Can Tisdale not read graphs, or does he not know what "explain" means?
Using a running mean is not wrong and probably makes it easier to compare the two. Last year was a strong El Nino year. Showing that it was just under the model predictions doesn't really vindicate the models. A strong El Nino year should be well above the model mean - just as it is in the Rahmstorf chart.
There was a period of La Nina dominance, so the first El Nino year is not likely to be well above the model mean.
Did you miss what was in the above article? The point is that Bob's not using a running mean to make it easier to compare with the multi-model mean. He's using a running mean so he can avoid the comparison. He goes to great lengths in more than one place to hide the fact that the models and observations have recently converged - even in the models that do not have the correct forcings. (He doesn't alert his readers to that fact either, because his intention is to deceive, not to inform.)
Regarding the El Nino, JCH is correct. It's this year when it would have a stronger effect on surface temperature, much more so than last year. See here and here.
The models do not time ENSO. The ENSO future will be what it will be. For instance, there could be 2 more El Nino events in a row. The mean would end up far above the model envelope. Wouldn't that be a hoot? I suspect suddenly a whole bunch of people would become expert at averaging ENSO to zero while seriously discussing how the models do not time ENSO.
Tisdale chooses CMIP uncorrected for forcings post 2006, then a long smoothing period to further divert.
He sticks to the one weighted presentation, when genuinely inquisitive people tend to present a number of graphs with input variety and processing indicated. In the spirit of discussion and elucidation...
Given the sheer amount of guff Tisdale pads his schtick with, and the amount of self-referential nonsense about his commitment to data analysis he includes, it's curious how little he actually explores data.
Well, not curious at all. He's an AGW rejectionist who can only post his anti-learning at a site dedicated to agnotology.
Re the point about models timing ENSO events, it's as JCH says. That means the multi-model mean (being an average) smears out things like ENSO events that would appear (at different times) in the individual model runs.
Bob expects each and every model to model each event of internal variability at the precise time it occurs in the live model (Earth). Doesn't happen.
Lazyzej, would you prefer a running mean over, say, a LOWESS smooth?
Why?
excuse my ignorance but has Bob basically run a "smoothing" algorithm against the dataset that simply chops off the recent warming?
In short, yes, Tadaaa. Bob wrote that he used a 61-month running mean filter, which means that the hottest months toward the end are smeared (averaged) in with the previous 61 months.
You can see it most clearly in his monthly chart with the 61-month filter as a big red line through the middle.
With a 61-month moving average, the last averaged number has to be 2 1/2 years (30 months) before the end of the observed period, and includes the average of all the 61 months through to January 2016 (or December 2015 with HadCRUT, because its January data wasn't out). That is, the last point will be at July 2013, if the last observation is January 2016. That moving average data point will be the average of the monthly temperatures from January 2011 to January 2016.
The charts I've added in the addendum above show the effect of using Bob's 61 month running mean, compared to actual data (and compared to using LOESS smoothing).
A running mean is okay for removing noise to better see the signal in the full chart, but if you're interested in what's happening at the beginning and end of the data series, it's not much help. There are better ways of showing that (e.g. the actual observations, or a LOESS filter).
(I used to use running means to help visualise what's happening in a chart until I was coerced by HW readers into getting off my butt and using LOESS smoothing. That way you get the curve going from the start point of the data series right through to the end point, without truncation. Nothing is perfect.)
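For anyone who wants to see the endpoint effect in numbers, here's a minimal sketch in Python (made-up monthly anomalies, not the actual GISTemp or HadCRUT data, and pandas/statsmodels are just my choice of tools for the illustration):

```python
# Toy illustration of why a 61-month centred running mean lops off the most
# recent observations, while a LOWESS/LOESS-style smooth runs to the endpoint.
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical monthly anomalies, Jan 1970 to Jan 2016: a warming trend plus noise.
months = pd.date_range("1970-01", "2016-01", freq="MS")
rng = np.random.default_rng(0)
series = pd.Series(0.0015 * np.arange(len(months)) + rng.normal(0, 0.1, len(months)),
                   index=months)

# 61-month centred running mean: the last valid value sits 30 months before
# the end of the record (July 2013 when the record ends in January 2016).
run_mean = series.rolling(window=61, center=True).mean()
print("Last observation:       ", series.index[-1].date())
print("Last running-mean point:", run_mean.dropna().index[-1].date())

# A LOWESS smooth with a comparable span reaches the final month,
# so the recent warming isn't hidden.
x = np.arange(len(series))
smooth = lowess(series.to_numpy(), x, frac=61 / len(series), return_sorted=False)
print("LOWESS estimate for", series.index[-1].date(), "=", round(float(smooth[-1]), 2))
```

The same effect is what the addendum charts above show: the running mean simply stops in mid-2013, while the LOESS curve runs right through to January 2016.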
lol, they are so ridiculously predictable it is laughable
conspiracy theorists never show their audience the whole picture, they simply rely on the believers to not bother asking questions - that's why the term "skeptic" is such an oxymoron with these loons
it is the same with the 911 "twoothers"
well done Sou
If the 61-month running mean flattens out the last 30 months, would it not be more fair to compare that endpoint result to the CMIP5 value from 30 months ago? Or take the CMIP5 values and run the same filter on them? I am not a mathematician/statistician so please be kind if this is a mathematically unsupportable proposition.
Bernard,
I don't think I mentioned a preference regarding LOWESS vs running mean. I just don't think that a running mean is wrong. It does a good job at showing that temperatures are running low compared to the reference. Sou does a good job explaining why that is. I'm not sure how we could accept why it is so but not that it is so.
JCH, you say: "there could be 2 more El Nino events in a row. The mean would end up far above the model envelope."
Is that true? I always thought that ENSO just caused the temperature to dance around the secular warming trend. I had thought that the strength of the event (rather than the frequency of the events) affected the amplitude of the warming fluctuations.
A moving average (aka: a running mean) is the most basic form of a digital filter. For time series analysis, one must use a centered mean, that is, an average with equal number of points before and after the time point for which one does the calculation. Trouble is, a moving average has some bad qualities, such as aliasing the frequency of the average into the resulting series. Also, the impulse response, which would apply to events such as a volcanic eruption, gives a smeared result that moves some of the energy in the pulse to time periods before the actual event.
HERE is a graph which demonstrates the problem. When one applies a centered moving average to a temperature time series, events such as Pinatubo are "squished" in the result. Another way to look at the result is that the "energy" (or area under the spike) is the same in the averaged curve. There are better filtering techniques, but moving averages are easy to program, so they are popular.
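A quick way to convince yourself of the impulse-response point is with a synthetic series (this is just a sketch with a made-up "Pinatubo-like" pulse, not real data):

```python
# A centred running mean spreads an abrupt cooling pulse out in time, moving
# some of it to months *before* the event, while conserving the area under it.
import numpy as np
import pandas as pd

n = 240                               # 20 years of monthly values
signal = np.zeros(n)
signal[120:138] = -0.4                # hypothetical 18-month volcanic cooling pulse

smoothed = pd.Series(signal).rolling(window=61, center=True).mean().fillna(0.0)

print("First month affected, raw series:     ", int(np.nonzero(signal)[0][0]))
print("First month affected, smoothed series:", int(smoothed.to_numpy().nonzero()[0][0]))
print("Area under raw pulse:     ", round(float(signal.sum()), 2))
print("Area under smoothed pulse:", round(float(smoothed.sum()), 2))
```

The smoothed pulse starts 30 months before the event and is much shallower, but the area (the "energy") is unchanged.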
"For time series analysis, one must use a centered mean, that is, an average with equal number of points before and after the time point for which one does the calculation."
I do wonder if that's a good idea for monthly data as it means one calendar month will be over represented. I don't see a problem with using a multiple of 12 months even if the centering won't be precise. (I should say I'm not an expert so I may well be missing something.)
"Also, the impulse response, which would apply to events such as a volcanic eruption, gives a smeared result that moves some of the energy in the pulse to time periods before the actual event."
But wouldn't this also be true with any smoothing method?
Sou I wonder what would happen to the CMIP5 curve if one applied a 61 month centered average to it?
Even the statistically challenged numpties at wuwt could not object to that. We are simply eliminating noise in a 'dubious' prediction.
My best guess is it would bring the prediction down to match the real temperature measurements!
Bert
I meant to say it would bring the CMIP5 prediction down to the 61-month centred average of the real temperature measurements.
This would expose Bob's 'trick' for what it really is. Bert
When was it Bob Tisdale started using running means? Is it something he has always used or is it just now when he needs them to do some extra desperate denying?
@Bert, that's probably what Bob's done. Over the full period, the 61-month average of observations wanders about, generally following the 61-month average of the multi-model mean (a bit above or a bit below). In the most recent 61-month averages, obs are a bit below.
Delete"I don't think I mentioned a preference regarding LOWESS vs running mean. I just don't think that a running mean is wrong. It does a good job at showing that temperatures are running low compared to the reference. "
I didn't say that you had "mentioned a preference regarding LOWESS vs running mean", I simply asked you which of the two techniques you would prefer, and why. Your response is a straw man: you answer a question I didn't ask and leave unanswered a question that I did ask.
And since I put my original question to you I note that Sou has included an addendum that explicitly compares moving averages and LOWESS smooths. Perhaps you would avail yourself of the graphs and reconsider my question about which technique is most appropriate, and why it is so.
Yeah, I think I get it.
E. Swanson,
HERE is a graph which demonstrates the problem. When one applies a centered moving average to a temperature time series, events such as Pinatubo are "squished" in the result.
Indeed. Jan Perlwitz once explained something else about CMIP5 model ensemble means over at William Brigg's blog which has stuck with me:
When you take the monthly temperatures from 108 simulations and you average these then this is equivalent (assuming ergodicity) to an average of the temperature over 108 month equal to 9 years, i.e., it is low-pass filtered. Thus, you compare the variability of a low-pass filtered temperature anomaly that is derived from the 108 model realizations with the unfiltered only one realization that is provided by Nature. Do you see the flaw in such a comparison? It is like comparing the average of the values from 108 throws with a die with another throw and then concluding that the other throw was done with a different die, if it gives you something else than the average value.
Each individual model simulation is equivalent to the one realization provided by Nature. The individual model simulations show similar up and downs of the temperature as observed in Nature. Those up and downs are just not in phase with the observed temperature, because the short-term temperature variability is mostly unforced one due to chaotic dynamics. Or if a simulation appears to be in phase with the observations temporarily it will be by mere chance.
I think we can imagine in our heads what a 109-month centred moving average would do to the Pinatubo event in GISTemp. But what if we used CMIP5 to "remove" the external forcing signal from GISTemp and then smoothed the residual over 109 months? We could then add the smoothed residual back and have an observational timeseries with internal variability on par with the model ensemble, but with the external forcing perturbations not squished or otherwise smoothed out:
https://2.bp.blogspot.com/-RZHRWBTfMqE/VsbCliaBYkI/AAAAAAAAAoI/ZuLDasd7Vvo/s1600/GISTemp%2Bvs%2BCMIP5%2BRCP6.0%2B109CMA%2Bvs%2BLOESS%2B2015-12.png
LOESS is a nifty trick for getting smoothing all the way to either endpoint, but that does come at the price of increased uncertainty since the local regression is only using 50% of the data points to derive the estimate at either extreme.
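If anyone wants to play with the idea, here's a rough sketch of that forcing-preserving smoothing using stand-in arrays (random numbers plus an invented volcanic dip, not the real GISTemp or CMIP5 series):

```python
# Smooth only the residual (obs minus ensemble mean), then add the forced
# signal back, so the filter doesn't squish events like Pinatubo.
import numpy as np
import pandas as pd

n = 1200                                       # 100 years of monthly values
rng = np.random.default_rng(1)
forced = 0.001 * np.arange(n)                  # stand-in for the CMIP5 ensemble mean
forced[600:630] -= 0.3                         # an invented volcanic dip in the forcing
obs = forced + rng.normal(0, 0.12, n)          # stand-in for the observations

residual = pd.Series(obs) - forced                         # "internal variability"
smooth_resid = residual.rolling(109, center=True).mean()   # 109-month centred mean
reconstructed = forced + smooth_resid                      # forcing left untouched

naive = pd.Series(obs).rolling(109, center=True).mean()    # smoothing obs directly

def dip_depth(series):
    """Depth of the volcanic dip relative to the surrounding level."""
    surrounding = (series[550] + series[680]) / 2.0
    return round(float(surrounding - series[615]), 2)

print("Dip depth, naive 109-month smooth of obs:", dip_depth(naive))
print("Dip depth, forcing-preserving smooth:    ", dip_depth(reconstructed))
```

In the naive smooth the dip is flattened to a fraction of its size; in the forcing-preserving version it keeps its full depth, which is the point of smoothing only the residual.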
Wolfgang Pauli is known for his rueful "It's not even wrong" putdown of hopelessly muddled work. He was also quoted as saying "What you said was so confused that one could not tell whether it was nonsense or not" to Lev Landau.
ReplyDeleteIt goes without saying that the gulf between Nobel Laureate Landau and Bob Tisdale is unbridgeable.
Well spotted!
I suspect Dr John Christy might be using a similar deception in the charts he presents to the US Congress; I will check next time he does one. I did work out he uses a 5-year average to stop spikes in the UAH/RSS curve from appearing above the CMIP5 curve.
I caught another denier using arbitrary multi-year averages on WoodForTrees data to get a "downturn" when there wasn't one.
I think I know what these people do, they plot their graph using different parameters until they get one that fits their narrative.
That was pretty obvious decades ago!
You must be too trusting, or something, not to have noticed this! Probably you are (or at least were) too nice a person. This personality trait can be a severe, even debilitating, problem when dealing with motivated FUD propaganda.
Christy's chart also adjusts the various graphs so that their trend lines all meet in 1979. This makes it look like models and observations were in perfect agreement at the start, but have increasingly deviated as time goes by.
This seems odd to me, as it gives the impression that the models were already deviating from the observations before they were developed.
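A toy example of what that alignment choice does (invented numbers, not Christy's actual series; the point is only how the choice of baseline shifts where the gap appears):

```python
# Pin every series so its fitted trend passes through zero at 1979, versus
# expressing each series as anomalies from a common 1981-2010 baseline.
import numpy as np

years = np.arange(1979, 2016)
rng = np.random.default_rng(2)
model = 0.025 * (years - 1979) + rng.normal(0, 0.05, years.size)  # warms faster
obs = 0.017 * (years - 1979) + rng.normal(0, 0.05, years.size)

def pin_trend_at_start(y, x):
    """Shift a series so its OLS trend line equals zero in the first year."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x[0] + intercept)

def common_baseline(y, x, lo=1981, hi=2010):
    """Express a series as anomalies from its 1981-2010 mean."""
    mask = (x >= lo) & (x <= hi)
    return y - y[mask].mean()

for name, fn in [("trend pinned at 1979", pin_trend_at_start),
                 ("1981-2010 baseline  ", common_baseline)]:
    m, o = fn(model, years), fn(obs, years)
    print(f"{name}: gap 1979-83 = {(m[:5] - o[:5]).mean():+.2f}, "
          f"gap 2011-15 = {(m[-5:] - o[-5:]).mean():+.2f}")
```

With the trends pinned at 1979, the whole divergence is pushed to the end of the record; with a common baseline it is split between the two ends.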
jgnfld.
Not sure what you mean, I knew Dr Christy was misleading the US Congress when I saw his tropo chart for the first time. He had moved the CMIP5 trace up to "align" it with the satellite trace.
Bellman.
Yeah, he went to a lot of trouble to ensure the observation trace was below the model trace at all times.
I really wonder about him and Curry. This misleading stuff will be on the internet forever. They must not be concerned about their future career prospects. And what will their children and grand-children think when this stuff is brought up in the future?
I suppose they'll tell their children that they did what they did in order to prevent lunatic 'greens' from wrecking the economy by mandating luxury, intermittent energy while closing nuclear power plants.
I have never met a climate science denier who was not also a passionate critic of popular 'green' climate policy options.
Typically, when I tell a denier that I do actually acknowledge the consensus on climate science, a typical first reaction is *not* "Don't you know that the climate models diverge from observations?". Rather, it is something like: "Don't you know that solar panels don't work at night?"
yes, they are the ones that endlessly politicise the issues
any wuwt thread is awash with talk of watermelons (green on the outside, red on the inside), and quite a few never even bother to address the pseudo science
they seem to assume that physics has a political allegiance - and if it diverges from theirs, then it must be wrong
Harry Twinotter: I really wonder about [Christy] and Curry. This misleading stuff will be on the internet forever. They must not be concerned about their future career prospects.
Both Christy and Curry are in their mid-60s. It's probably not much of a consideration.
Harry...
Internet humor doesn't always work. I really did try, though.
I've added an addendum with more charts to illustrate more clearly how Bob's technique is used to hide the recent warming.
A similar smoke and mirrors trick is going on at Ken's Kingdom, where Ken has just introduced 12-month smoothing to extend his UAH 'pauses'; see https://kenskingdom.wordpress.com/2016/02/13/the-pause-update-january-2016/ .
Ken's Kingdom. I think he was the one who made up his own heat wave index so he could then say the Australian BOM's heat wave index was fake.
I put him right on that one, even though I was wasting my time.
I suspect the satellite "pause" people are going to run out of options soon.
There are going to be a LOT of articles pointing that out the month that data is published.
So he's actually basing the trend line on the smoothed data? I also notice he defines a zero trend as anything less than +0.1C / century.
I wonder if Monckton will start using this approach when his pause disappears in the next couple of months.
Incidentally, I've been trying to use Ken's methods, and I get a pause starting in April 1997, not March as he says.
Bellman, are you using UAH beta 4 or the latest beta 5? Every successive beta lowers the trend and also extends the pause by a month or so. Anyway it will all end in tears next month unless Roy Spencer can get beta 6 out in time to save the day.
No, I checked and I was using beta 5. Beta 4 would start the pause in May 1997, but that only goes up to Dec 2015.
I wouldn't want to press the point too strongly as it's entirely possible I've made a mistake in my calculations, or I'm misunderstanding how he is doing the calculations. It really doesn't make much difference as it's a pointless exercise to pin a trend to a specific month.
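For what it's worth, here's my guess at the recipe as a rough sketch (synthetic data and my own code, not Ken's or Monckton's actual method): find the earliest month from which the least-squares trend to the present falls below the chosen threshold.

```python
# Hunt for a "pause": the earliest start month whose OLS trend to the end of
# the record is below +0.1 C/century (the threshold mentioned above).
import numpy as np

def pause_start(anoms, threshold_per_century=0.1):
    """Return the earliest start index whose trend-to-end is below the threshold."""
    n = len(anoms)
    for start in range(n - 24):                # demand at least two years of data
        y = anoms[start:]
        x = np.arange(len(y)) / 1200.0         # months expressed in centuries
        slope = np.polyfit(x, y, 1)[0]         # trend in degrees C per century
        if slope < threshold_per_century:
            return start
    return None

# Synthetic demo: 20 years of warming followed by 18 flat-ish years.
rng = np.random.default_rng(3)
warming = 0.0015 * np.arange(240) + rng.normal(0, 0.1, 240)
flat = warming[-1] + rng.normal(0, 0.1, 216)
start = pause_start(np.concatenate([warming, flat]))
print("'Pause' starts at month index", start, "of", 240 + 216)
```

Small changes in the data version or the threshold shift the start month around, which is part of why pinning it to March versus April 1997 doesn't mean much.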
The denier tactic once the faux "pause" disappears is to insist it will return as soon as the next la Nina happens.
WUWTiots don't seem to be bothered by not having an actual physical event that caused the "pause" (mostly because the event that allows the mathturbatory "pause" is the 1998 El Nino, and they don't want to admit that). The "pause" is not a thing that happens in the Real World(tm) even in their minds. It's just a political argument, not a physical one, so its (supposedly temporary) disappearance is not a hindrance.
There have been a number of fundamentalist cults that pick some date at which the Second Coming will happen, and the world will end. Surprisingly, none of these cults seem to be bothered when the date passes and the world is still here. Likewise, denialist cultists won't be concerned when the "pause" fraud stops. It will return at the time of the Republic's greatest need.
Furthermore, comparing observations with the ensemble mean and not indicating the spread of the models is highly misleading. Temperature is not expected to track the mean but is in a sense one of the model outputs, with far more ups and downs than the mean has.
This issue is important for communication. The Earth only gets one shot at producing a run of values. Comparing the Earth's run to the ensemble mean is as valid as comparing any other individual run to the ensemble mean. Effectively NO individual run "agrees with the models" if we take tracking the ensemble mean as the criterion.
The ensemble mean of a modelled die throw is 3.5
No single throw will match the ensemble mean; it will always be between 15% and 85% different. Even with a loaded die.
izen
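The die analogy in numbers, as a quick toy sketch (random integers, nothing to do with the actual CMIP5 archive):

```python
# 108 "model runs" of die throws: the ensemble mean hugs 3.5, but the single
# realisation Nature gives us never matches it and is far noisier.
import numpy as np

rng = np.random.default_rng(4)
ensemble = rng.integers(1, 7, size=(108, 120))   # 108 runs, 120 "months" each
ensemble_mean = ensemble.mean(axis=0)            # smooth, low-pass-filtered by averaging
nature = rng.integers(1, 7, size=120)            # the one run we actually get

print("Std dev of the ensemble mean:     ", round(float(ensemble_mean.std()), 2))
print("Std dev of the single 'Earth' run:", round(float(nature.std()), 2))
print("Months where the single run equals the ensemble mean:",
      int(np.isclose(nature, ensemble_mean, atol=0.01).sum()))
```

Judging the single run by how closely it tracks that smooth average is exactly the comparison Perlwitz was objecting to in the quote above.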
Ed Hawkins has a good page on this with excellent charts
http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/
Off topic, but apparently after tossing and turning most of the night, our Anthony finally came up with an angle. The detection of gravitational waves was a triumph for fossil fuel power.
To prove the point beyond doubt he puts up electricity supply breakdown charts for the States where the two detectors reside.
One of which is Washington.
Washington gets >70% of its electricity from hydropower.
In fact note that according to the charts, less than half (47.2%) of the power across the 2 states comes from fossils, the rest is nuclear or renewables.
A triumph for low-carbon generation.
I need a new irony meter.
Watts is a stereotypical example of motivated reasoning.
Deletethanks Sou – can you share a link to the source for Rahmstorf's chart? Couldn't find it in search and interested in details / what context he shared this.
Sorry. I must have been in a rush. It's from a tweet by Stefan Rahmstorf, updated from the Mann15 article in Nature's Scientific Reports.
thanks! great post.