This was a month late at WUWT but better late than never I suppose. WUWT's current leading blog writer, Eric Worrall, has written about a paper published last month in Nature Climate Change (archived here). The authors, Mark Richardson, Kevin Cowtan, Ed Hawkins and Martin B. Stolpe, had a look at how temperature records are sampled. They found that slower warming regions are preferentially sampled, which means that observations are biased low.
The authors reported that, after adjusting the observations for these biases, the transient climate response (TCR) is 1.66 °C, with a 5% to 95% range of 1.0 °C to 3.3 °C. This is consistent with the range derived from the climate models considered in the IPCC AR5 report.
Climate sensitivity and transient climate response (TCR)
First of all, what is climate sensitivity and the transient climate response? Kevin Cowtan explains this on his blog:
Climate models are used to estimate the likely range of warming we will see in the future for a given level of fossil fuel emissions. The size of the effect of human activity on global temperature is often summarized by a single number, the "climate sensitivity", which measures how much the Earth will warm in response to a doubling of atmospheric CO2.
The climate system takes time to respond to changes, and so different measures of climate sensitivity are used for different timescales. Most relevant to policy is the "Transient Climate Response", or TCR, which measures how much warming will occur over the span of a human lifetime.
Different estimates for TCR from different methods
The authors were investigating the reasons for the lower TCR reported from energy-budget models:
- 1.5 °C (1.0-1.9 °C) from Bengtsson and Schwartz (2013)
- 1.3 °C (0.9-2.0 °C) from Otto et al. (2013), and
- 1.3 °C (0.9-2.5 °C) from Lewis and Curry (2015).
1. Using climate model simulations, and
2. Using the Earth's energy budget (as in the three papers above) with the equation:

TCR = ΔF2xCO2 × ΔT / ΔF

where ΔT is the observed change in temperature, ΔF is the change in radiative forcing, and ΔF2xCO2 is the forcing change for doubled atmospheric CO2.
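As a rough illustration of that energy-budget formula (this is not the paper's own calculation; the input values below are assumed, ballpark figures in the style of the Otto et al. approach):

```python
# Energy-budget estimate of the transient climate response (TCR):
#   TCR = F_2xCO2 * dT / dF
# All three inputs are illustrative assumptions, not values from the paper.
F_2xCO2 = 3.44   # W/m^2, forcing for doubled CO2 (assumed)
dT = 0.75        # K, observed temperature change (assumed)
dF = 1.95        # W/m^2, change in radiative forcing (assumed)

tcr = F_2xCO2 * dT / dF
print(f"TCR ≈ {tcr:.2f} °C")  # ≈ 1.32 °C, near the low energy-budget estimates
```

With inputs in this ballpark, the formula lands close to the 1.3 °C figures from the three papers above, which is the point: a biased-low ΔT feeds straight through into a biased-low TCR.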
Model-observation differences disappear when the data are treated the same
When we treat the models like the observations, we get a lower estimate of climate sensitivity. When we treat the observations like the models, we get a higher value. In both cases the models and the observations agree. But which is right?
Three factors explaining the difference between results
He then explained three factors that contribute to the difference between climate sensitivity estimated from models compared with that estimated from observations:
- Incomplete global coverage - 15%: The biggest factor affecting low observation estimates of TCR is the incomplete global coverage of historical temperature observations. If the historical coverage is applied to climate model outputs, it reduces the temperature change by about 15%.
- Surface vs air temperature - ~5%: The next biggest factor is using sea surface temperature rather than air temperature in the observational record. If you do that with climate models, the temperature change drops by a little under 5%.
- Sea ice edge changes - <5%: Lastly, where the sea ice edge has changed, blending of air and sea temperature in those regions also reduces the temperature change. The amount by which it reduces the temperature change is the least certain, and it likely has the smallest effect: something less than 5%.
Figure 1 | Change in near-surface air temperature from 1861-1880 to 2000-2009, seen globally (left), with typical HadCRUT4 data coverage over 2000-2009 (centre), and with typical HadCRUT4 data coverage over 1861-1880 (right). Typical coverage refers to cases where more than 25% of months within that period report data. Source: Kevin Cowtan
When combined, these three factors reduce the temperature change in the climate model outputs by about a quarter. The different handling of the temperature data between the models and observations therefore explains almost all of the difference between the estimates of climate sensitivity from models and observations.
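A back-of-envelope check of how those three factors could add up to "about a quarter", assuming (my assumption, not the authors' stated method) that the reductions act roughly multiplicatively on the temperature change:

```python
# Combining the three biases described above. The exact percentages and the
# multiplicative combination are illustrative assumptions.
coverage = 0.15      # incomplete global coverage, ~15%
sst_vs_air = 0.05    # sea surface vs air temperature, ~5%
sea_ice = 0.04       # sea ice edge blending, <5% (assumed 4%)

remaining = (1 - coverage) * (1 - sst_vs_air) * (1 - sea_ice)
total_reduction = 1 - remaining
print(f"Combined reduction ≈ {total_reduction:.0%}")  # ≈ 22%, about a quarter
```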
This is also explained in the NASA press release:
The Arctic is warming faster than the rest of Earth, but there are fewer historic temperature readings from there than from lower latitudes because it is so inaccessible. A data set with fewer Arctic temperature measurements naturally shows less warming than a climate model that fully represents the Arctic.
Because it isn't possible to add more measurements from the past, the researchers instead set up the climate models to mimic the limited coverage in the historical records.
The new study also accounted for two other issues. First, the historical data mix air and water temperatures, whereas model results refer to air temperatures only. This quirk also skews the historical record toward the cool side, because water warms less than air. The final issue is that there was considerably more Arctic sea ice when temperature records began in the 1860s, and early observers recorded air temperatures over nearby land areas for the sea-ice-covered regions. As the ice melted, later observers switched to water temperatures instead. That also pushed down the reported temperature change.
Scientists have known about these quirks for some time, but this is the first study to calculate their impact. "They're quite small on their own, but they add up in the same direction," Richardson said. "We were surprised that they added up to such a big effect."
These quirks hide around 19 percent of global air-temperature warming since the 1860s.
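The coverage-masking idea in the press release can be sketched with a toy calculation: take a model warming field on a latitude grid, with the Arctic warming fastest, and compare the area-weighted global mean with and without the poorly observed Arctic. Everything here (the grid, the warming profile, the 60°N cutoff) is invented for illustration:

```python
import numpy as np

# Toy illustration of coverage masking. A made-up model warming field warms
# by 0.8 °C everywhere, with extra warming poleward of 60°N (Arctic
# amplification). Masking out the Arctic, as done to mimic HadCRUT4
# coverage, lowers the area-weighted global mean.
lats = np.linspace(-87.5, 87.5, 36)                    # grid-cell centre latitudes
warming = 0.8 + 2.0 * np.clip((lats - 60) / 30, 0, 1)  # extra Arctic warming
weights = np.cos(np.radians(lats))                     # area weight per band

full_mean = np.average(warming, weights=weights)       # complete coverage

observed = lats < 60                                   # pretend no Arctic data
masked_mean = np.average(warming[observed], weights=weights[observed])

print(full_mean > masked_mean)  # True: masking hides the Arctic warming
```

The masked mean sees only the 0.8 °C background warming, so it understates the true global figure, which is the qualitative effect the study quantifies for the real records.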
Ed Hawkins wrote on his blog:
Richardson et al conclude that previous analyses which reported observation-based estimates of TCR toward the low end of the model range did so largely because of inconsistencies between the temperature reconstruction methods in models and observations.
As observational coverage improves the masking effect will reduce in importance but will still remain for the historical period unless we can rescue additional, currently undigitised, weather observations. The blending issue is here to stay unless estimates of changes in air temperature can be produced over the ocean regions. The physical mechanisms for the different simulated warming rates between ocean and air temperatures also need to be further explored.
Implications for climate policy and targets
Ed Hawkins added a comment about the implications of the work (which was also mentioned in the paper):
Finally, if the reported air-ocean warming and masking differences are robust, then which global mean temperature is relevant for informing policy? As observed? Or what those observations imply for ‘true’ global near-surface air temperature change? If it is decided that climate targets refer to the latter, then the warming is actually 24% (9-40%) larger than reported by HadCRUT4.
And that is a big difference, especially when considering lower global temperature targets.
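To see the size of that difference, here is a quick rescaling using the quoted 24% (9-40%) figures; the 0.75 °C reported-warming input is an illustrative HadCRUT4-style value, not one taken from the paper:

```python
# Rescaling a reported (blended, masked) warming to an implied "true"
# near-surface air temperature change, per the quoted 24% (9-40%) figures.
reported = 0.75                      # °C, illustrative reported warming (assumed)
best, low, high = 1.24, 1.09, 1.40   # scaling factors from 24% (9-40%)

print(f"true warming ≈ {reported * best:.2f} °C "
      f"({reported * low:.2f}-{reported * high:.2f} °C)")  # ≈ 0.93 (0.82-1.05) °C
```

Nearly two tenths of a degree of extra implied warming matters a great deal when targets like 1.5 °C or 2 °C are on the table.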
What the anti-science brigade think about this science ...
Over at WUWT, Eric Worrall wrote:
NASA researcher Mark Richardson has completed a study which compares historical observations with climate model output, and has concluded that historical observations have to be adjusted, to reconcile them with the climate models.

He added:
Frankly I don’t know why the NASA team persist with trying to justify their increasingly ridiculous adjustments to real world observations – they seem to be receiving all the information they think they need from their computer models.
That's Eric's comment on a paper that compares models with observations, working out the details explaining differences in results. Without the observations there'd have been no means of comparing the model outputs with observations. That would please the anti-science crowd no doubt, but it wouldn't expand our knowledge and understanding of the world we live in. (There's no point in my making any remark about Eric's conspiracy theory about "ridiculous adjustments". It's ridiculous.)
From the WUWT comments
Most of the comments at WUWT are worse than the usual ridiculous. They are viciously anti-science and a display of rank stupidity. There were a couple of people who slipped past Anthony's censors; however, most of the comments are from the utter nutter variety who crawled out from under a slimy rock to call for scientists to be put in jail. WUWT readers won't rest easy until the last proverbial book is burned, the USA has Trump-Pence-Putin for President, borders of every nation are shut tight and, in the USA, roaming vigilantes are gunning for everything that moves.
Bitter&Twisted could be a spambot for all the value he or she adds to the discussion:
July 24, 2016 at 8:01 am
They have no shame.
July 24, 2016 at 2:43 pm
I think you are completely misreading the paper. As I understand it the authors have taken the output from their global models and processed the data in the identical way to what other researchers have done with the measured historical temperature data and have found good agreement. Note that in this case there is no adjustment to the raw data and similarly no adjustments to the modelled data but rather only in the method used to calculate an average temperature from a climate model.
July 24, 2016 at 11:17 am
“Isn’t this sort of back asswards science? Fudge the data to fit a model?”
Eh, it’d be backwards if that was what was going on, sure. But it’s not.
They’re saying that what we measured in real life are not exactly the same metrics as what had been reported from the models. We’d been partly comparing apples and oranges. For instance, the models can report their temperatures for the entire surface of the Earth, whereas in real life, our measurements in the Arctic have historically been a bit sparse. You may need to account for that, either by improving your measurements there, or by dropping that region out of what you report from the models.
So, what’s happening is just that they’re saying “hey, let’s make sure we’re doing a fair comparison”. And good comparisons are a good thing in science; you want to make sure that you’re being as true as possible to the data and what it represents.
You have to try pretty hard to skew this into bad science.
Bartleby falsely accuses the researchers of fabricating data. They didn't. It's people like Bartleby who make up stuff.
July 24, 2016 at 11:39 am
“You have to try pretty hard to skew this into bad science.”
No, you don’t. Fabricating data is not good science. It’s a far cry from saying the initial measurements are bad and there’s no way to correct that. That’s good science. Bad science involves saying we don’t have any data so we’ll make some up, and look, it fits our model! Surprise surprise! Aren’t we just brilliant? Who’d have thunk it!?
Eugene WR Gallun goes hell for leather misrepresenting what the scientists did. Is it deliberate and knowing misrepresentation, or is this the Dunning-Kruger effect, or is it his world view preventing him from understanding, or is he this stupid all the time? He also shouts a lot.
July 24, 2016 at 6:16 pm (excerpt)

Here is what they are doing. They claim that past temperature data is flawed, failing to show enough warming. They specify what they think those flaws are. Then they take their models and apply the same flaws to the model output. Suddenly their model output matches what the measured temp data predicts.

They claim that their models only match the measured data if they DELIBERATELY FLAW THEIR MODELS!

They start out with the assumption that their models are FLAWLESS and claim to demonstrate that only if their models are DELIBERATELY flawed to match the flaws they claim exist in the temperature record will both make the same predictions.

In their twisted minds It then follows that this proves their models truly are flawless because they only produce poor results if you deliberately flaw them.

But the model have never ever worked!! They are intrinsically flawed within themselves. The flaws in the models have nothing to do with the flaws they claim exist in past temperature data. THIS IS REALLY JUST SIMPLE MISDIRECTION. THEY MISDIRECT BY SAYING — LOOK AT THE FLAWS IN THE TEMPERATURE RECORD! DON’T YOU DARE LOOK AT THE DIFFERENT FLAWS IN OUR FLAWLESS MODELS!

Eugene WR Gallun

No. That isn't what the scientists did. The observations and models match if the data are treated the same way. That is, for example, if the areas of the globe with no observations are masked out of the model results, and if sea surface temperatures are used not air temperatures. By doing this they found the model results and observations matched. That's how the researchers were able to pinpoint and quantify the differences. It's called science (something that WUWT fans know nothing about).
Jim G1 thinks he's in a discussion about politics. He could be right (pun intended).
July 24, 2016 at 8:15 am
The corruption of the left knows no boundaries irrespective of the field of endeavor one examines.
Geoff wrongly thinks the scientists changed data. They didn't. He probably thinks his comment makes him look clever. It doesn't.
July 24, 2016 at 8:16 am
Sounds reasonable. If the modelling isn’t working then change the physical historical data to that point needed agree with the models.
Those NASA guys have a sense of humour. Release an April Fools Day article 3 months after April Fools Day.
G. Karst also wrongly thinks that data was changed. Sheesh. One naturally doesn't expect WUWT-ers to read a scientific paper, but aren't any WUWT fake sceptics capable of reading a simple press release?
July 24, 2016 at 8:31 am
Isn’t this a clear admission that the models are clearly wrong and not fit for predictive purpose. Changing data to verify a model used to be regarded as academic fraud. All hypothesis can be validated using such reasoning. Can everything conceived – be true? GK
Justthinkin is another dim denier who just doesn't know how to think. What's the bet he'll vote to put Putin's proxy into the White House?
July 24, 2016 at 8:34 am

These “scientists” can’t be charged with anything? Howzabout outright fraud and corruption? I mean. This guy admits it right in the “paper”!! Has science really sunk so low? To steal a phrase from SDA, this isn’t your grandmother’s science.

nigelf is showing his Trump-Pence colours:
July 24, 2016 at 2:22 pm
I’m going to be so very happy when Mr. T cuts all this funding and reads them the riot act.
This is all about to come to a crashing end and it couldn’t be soon enough.
John Harmsworth might not have heard what's been happening in Siberia and Alaska the last few years (and months):
July 24, 2016 at 9:37 am
If they had no measurements at all the Arctic would be on fire!
ScienceABC123 is another one of many who can't understand plain English:
July 24, 2016 at 8:50 am
Translation: “Our models don’t match the historical record. So we must adjust the historical record to save our models.”
This time Nick Stokes tries to explain, in vain:
July 24, 2016 at 2:42 pm
No, despite what the headline says, there is no mention of adjusting data at all. In fact, they go the other way. They show that if you process model output temperatures in the same way that HADCRUT 4 averages global measured temperature, you get a similar result.
Owen in GA is another utter nutter conspiracy theorist:
July 24, 2016 at 8:53 am
I believe the data protection act makes it a criminal offense for government employees to adjust that data. We need an attorney general who will start prosecuting that act. We can start with the climate adjusters and move on to the EPA. They have all been guilty of changing the collected data to more closely match their video games and need to go to jail for it, particularly any that overwrite the original data in the process! If the original data is still available they have an out, but then their “products” need to be called something other than data, because none of it was ever observed.
The rest of the 180 or so "thoughts" are just a repeat of the above, though many are even stupider. You can read them here if you want to. Anthony Watts must be so proud of his pathetic band of dim, nasty, and rather disgusting science deniers.
References and further reading
- Article and data on Kevin Cowtan's blog
- Article on Ed Hawkins' blog
- Historical Records Miss a Fifth of Global Warming: NASA - Article from NASA
- Climate Sensitivity Paper Reconciles Data With Models And Suggests A Warm Future - article at Reporting Climate Science
- Is it time to freak out about the climate sensitivity estimates from energy budget models? - article by Victor Venema on his blog
- A FIFTH of global warming in the past 150 years has been missed by historical records due to 'quirks', Nasa study claims - article by Mark Prigg in the Daily Mail
Richardson, Mark, Kevin Cowtan, Ed Hawkins, and Martin B. Stolpe. "Reconciled climate response estimates from climate models and the energy budget of Earth." Nature Climate Change 6 (2016): 931-935. doi:10.1038/nclimate3066
Bengtsson, Lennart, and Stephen E. Schwartz. "Determination of a lower bound on Earth’s climate sensitivity." Tellus B 65 (2013). doi:10.3402/tellusb.v65i0.21533 (open access)

Otto, Alexander, Friederike E. L. Otto, Olivier Boucher, John Church, Gabi Hegerl, Piers M. Forster, Nathan P. Gillett et al. "Energy budget constraints on climate response." Nature Geoscience 6, no. 6 (2013): 415-416. doi:10.1038/ngeo1836 (pdf here)

Lewis, Nicholas, and Judith A. Curry. "The implications for climate sensitivity of AR5 forcing and heat uptake estimates." Climate Dynamics 45, no. 3-4 (2015): 1009-1023. doi:10.1007/s00382-014-2342-y (pdf here)

Marvel, K., G. A. Schmidt, R. L. Miller, and L. S. Nazarenko. "Implications for climate sensitivity from the response to individual forcings." Nature Climate Change 6 (2016): 386-389. doi:10.1038/nclimate2888