Monday, July 27, 2015

Bob Tisdale's latest conspiracy theory about ocean heat

Today Bob Tisdale has found a new conspiracy theory that he's promoting (archived here). It's much the same as all the others. From his ergonomic computer chair in his basement (is he that advanced?), Bob decided that another group of scientists must be fudging the data. Problem is, Bob doesn't understand the data or how to use it, let alone how the scientists analysed it.

A warning that this article is long. I enjoyed writing and researching it. The paper this article is based on is a great example of the sort of effort and thinking required to scope out and quantify the changes we're bringing about. Which is of critical importance IMO.


Progress in determining changes in ocean heat content


The paper Bob doesn't like this time is by Dr. Lijing Cheng from the International Center for Climate and Environment Sciences in China, and co-authors Jiang Zhu and John Abraham. They have been looking to improve the record of heat content of the top 700 m of the ocean. The paper is called: "Global upper ocean heat content estimation: recent progress and the remaining challenges". As the title suggests, the paper describes recent progress in this regard, and the challenges that remain.


Bob Tisdale scoffs and whistles the WUWT hounds


Needless to say, pseudo-scientist Bob Tisdale uses this as another excuse to scoff at science. He understands little and isn't afraid to get out his keyboard to prove it. Bob wrote:
A week or so ago, a troll left a link at my blog (Thanks, David) to a supposed-to-be-alarming blog post about a new climate study of ocean heat content. 
To Bob, anyone who understands science is a troll. Which is only one of the weird things about denier blogs. Bob continued:
According to the study, a revised method of tweaking ocean heat reconstructions has manufactured new warming so that the top 700 meters of the oceans are warming faster than predicted by climate models. In other words, the “missing heat” is missing no more.

Contrary to Bob's claim, the paper made no reference to "missing heat". Also, notice the dog-whistle words - "tweaking" and "manufactured". There's more:
John Abraham, alarmist extraordinaire from SkepticalScience and The Guardian’s blog ClimateConsensus, was a coauthor. See Abraham’s post The oceans are warming faster than climate models predicted. Can anyone guess the goal of their study from the title of Abraham’s post?
John Abraham isn't from SkepticalScience.com. He's Professor of Thermal Sciences in the School of Engineering, University of St. Thomas, St. Paul, Minnesota. And I doubt that anyone at WUWT would guess the goal of the study from the title of the Guardian article. (That title was about the findings, not the objectives.) Or from Bob's article, for that matter. Like Bob Tisdale, WUWT readers are conspiracy theorists, and they would have "guessed" something different from the real reason for the study - which was to improve the record of ocean heat content and work out what still needs to be done.




An alternative reconstruction of heat in the top 700m of ocean


The introduction to the paper explains what Bob Tisdale ignores (and probably doesn't understand, dismissing it as irrelevant fudgery):
Ocean heat content (OHC), as a key indicator for the Earth’s energy budget, provides a metric of the ongoing global warming (Rhein et al. 2013). Since more than 90% of the global warming energy is buried in the ocean due to its large heat capacity, it is a vital task for climate community to estimate the rate of historical OHC change (Abraham et al. 2013). However, due to the insufficiency of historical observation coverage, most of the existing OHC estimates can only be traced back to the 1960s and extend from sea surface down to ~700m. The existing estimates of historical OHC change by different groups show a substantial divergence and reveal uncertainties in its value....

...A large number of studies suggest that uncertainties during OHC calculations stem from the following issues:
  1. Systematic bias of the temperature measurements such as Expendable BathyThermographs (XBT) (Abraham et al. 2012a, 2012b; Cheng et al. 2014; Gouretski and Koltermann 2007; Gouretski and Reseghetti 2010; Levitus et al. 2009) and Mechanical Bathythermographs (MBT) (Ishii and Kimoto 2009).
  2. Insufficient coverage of in-situ ocean temperature observations, in both horizontal and vertical dimensions (Cheng and Zhu 2014a, 2014b; Lyman and Johnson 2008).
  3. Choice of key methodologies such as climatology (Cheng and Zhu 2015; Lyman and Johnson 2013).
  4. Quality control of the in-situ data.

The accuracy of OHC estimation relies on proper treatments of the four issues listed above.

In this paper, we will briefly discuss these error sources and review the recent progress of solving them. Based on the proper solutions to those errors, a new assessment on historical upper 0-700m OHC change can be achieved. However, we note that we do not mean that this estimate is the best one compared with previous estimates, because challenges still remain during OHC calculation. The challenges will require more detailed studies in the future. 

A most challenging problem


In his Guardian article, John Abraham describes the problems in terms most people (though not Bob Tisdale) would understand. He wrote (my emphasis):
But how would you measure the ocean? How would you make consistent, long-term measurements that would allow people to compare ocean heat from decades ago to today? How would you make enough measurements throughout the ocean so that we have a true global picture?

This is one of the most challenging problems in climate science, and one that my colleagues and I are working hard on. We look throughout measurement history; first measurements were made with canvas buckets, then insulated buckets, and other more progressively complex devices. Many measurements were made along ocean passageways as ships transported goods across the planet.

As more ship travel occurred, and more measurements were made, the coverage of temperature measurements across the globe increased. So, over time, we say the temporal and spatial resolution increased. As these changes occurred, you have to be careful that any trend you see isn’t just an artifact of the resolution or the instrument accuracy. 
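The coverage artifact Abraham describes can be illustrated with a toy example (all numbers invented, nothing from the paper): a temperature field with no warming at all will still show a spurious "trend" in a naive average if sampling drifts toward warmer regions over time.

```python
# Toy illustration (numbers invented): a temperature field with no warming
# at all, sampled unevenly over time.
true_temp = {"tropics": 28.0, "midlat": 15.0}  # constant in time, no trend

def naive_mean(samples):
    """Plain average of whatever locations happened to be sampled."""
    return sum(true_temp[region] for region in samples) / len(samples)

early = ["midlat"] * 8 + ["tropics"] * 2  # mostly mid-latitude shipping lanes
late = ["midlat"] * 2 + ["tropics"] * 8   # coverage has drifted tropics-ward
print(naive_mean(early), naive_mean(late))  # 17.6 25.4 - a fake "warming"
```

This is exactly why changes in spatial coverage have to be accounted for before any trend can be trusted.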


Bob's conspiratorial thinking


In contrast to the careful work and innovative approach of the scientists, Bob scornfully dismisses the study, writing:
Bottom line: To manufacture the new warming, Cheng et al. adjusted, tweaked, modified (tortured) subsurface ocean temperature reconstructions to the depths of 700 meters starting in 1970.
Note his dog-whistles again - "manufacture", "adjusted", "tweaked", "modified", "tortured". Bob doesn't have a clue what the researchers did but he does know the words to get deniers all worked up in a lather of conspiracy theorising.

To illustrate that Bob doesn't have a clue, he put up a chart claiming:
My Figure 1 compares the “unadjusted” data versus the much-adjusted ocean heat content reconstruction from the NODC. It is not the data presented in Cheng et al. (I used the UKMO EN3 reconstruction for the NODC “unadjusted” data.  It used to be available through the KNMI Climate Explorer.) I’m providing Figure 1 to give you an idea of how horribly the data had already been mistreated to prepare the base NODC reconstruction.
Bob isn't showing what he thinks he's showing. (He makes this sort of mistake all the time.) UKMO EN3 can be obtained from the Met Office page, which describes it as:
The EN3 dataset consists of two products:
  • Observed subsurface ocean temperature and salinity profiles with data quality information, and,
  • Objective analyses formed from the profile data.

Why Bob used EN3, when it has been superseded by EN4 and is no longer updated, I don't know. Maybe because he couldn't download EN4 from KNMI (it returns an error). EN4 is described in Good et al (2013) as a collection of profiles obtained across the global oceans. It's not described as a global reconstruction:
...a collection of ocean temperature and salinity profiles obtained across the global oceans over the period 1900 to present to which a series of quality control checks have been applied. Associated with this are monthly objective analyses with uncertainty estimates.

Here is the processing workflow for UKMO EN4, from Good et al (2013):

Figure 1. Flow of processing performed on the data. Source: Good13



Compare that with the process followed by Cheng et al:

Figure 1. Schematic illustration of how annual mean ocean heat content is calculated from the ocean temperature observations (raw data). Source: Cheng15

As you can see, in addition to quality control, Cheng15 included steps of grid averaging and mapping. That seems fairly important if you want a meaningful estimate of total ocean heat content change. The difference between the two is that EN4 is a set of profiles across the oceans. What Cheng and co did was use ocean subsurface temperature profiles to generate a globally integrated reconstruction of historical ocean heat content change in 0-700 m. The profiles on their own aren't sufficient. To get an overall global integration, you need to grid and map the data using the information from the temperature profiles. (If you go to the EN4 website, the data is not provided as a globally averaged dataset. It's provided as profiles, in different versions with different corrections.)
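The grid-average-and-map step is what turns scattered profiles into a single global number. Here's a minimal sketch of the idea - this is not Cheng et al.'s actual method (cell size, weighting and the infilling of empty cells are all simplified away), just an illustration of why you can't simply average raw profiles:

```python
import math
from collections import defaultdict

def global_mean_from_profiles(profiles, cell_deg=1.0):
    """Grid-average scattered profile anomalies into cells, then
    area-weight the cell means to a global mean.
    profiles: iterable of (lat, lon, temp_anomaly)."""
    cells = defaultdict(list)
    for lat, lon, anom in profiles:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        # weight each cell by cos(latitude) to account for cell area
        w = math.cos(math.radians((ilat + 0.5) * cell_deg))
        num += w * sum(anoms) / len(anoms)
        den += w
    return num / den

# Two profiles fall in the same equatorial cell, one near the pole:
obs = [(0.2, 10.3, 0.5), (0.4, 10.6, 0.7), (80.1, 30.2, 0.1)]
print(round(global_mean_from_profiles(obs), 3))
```

Without the gridding, the two co-located equatorial profiles would be double-counted; without the area weighting, the near-polar cell would count as much as an equatorial one.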

If you don't do the global integration, Bob's own chart demonstrates the problem. He seems to think that the brown line is the "good" one (because he says it's "unadjusted") and the blue one is a "much-adjusted ocean heat content reconstruction from the NODC". I don't think he knows what he's doing. I've highlighted the region where his "unadjusted" data is quite unrealistic:

Source: WUWT

Compare Bob's chart with the one below from Cheng15, showing the globally averaged temperature for 0-700m. I've overlaid lines to show more clearly the impact of El Chichón (1982) and Pinatubo (1991). ENSO also shows up in the global record, as shown below:

Figure 2. a). Estimation of historical ocean heat content change (represented by the 0-700m averaged temperature anomaly in °C) from 1970 to 2014 (red). The red dashed lines and the yellow shades represent the spread of the 50 calculations by using 60% randomly selected grids each time.
b). The annual global averaged upper-ocean warming rates (in blue) computed from first differences of OHC700m in °C/yr. Successive 9-year trends of OHC centered on each year are shown in dark green, and the 45-year (1970-2014) OHC trend is shown as a red dashed line. For comparison, the Nino3.4 index is shaded in light blue. The two volcanoes in 1982 and 1991 are marked by arrows and the OHC700m change at the same time is highlighted in purple. The OHC700m change during the extreme ENSO event of 1997-1998 is highlighted in light green. Source: Cheng15

By the way, Cheng15 didn't use EN4. They used pretty much the same source data as EN4 for temperature profiles though.

Bob also complains about the charts showing both temperature in C and heat content in joules. Bob wrote:

But the trends listed on the graph are so minute, shown in ten-thousandths of a degree C per year, they’re likely losing some of their audience with all of those zeroes.
As Gavin Schmidt said in a pithy comment at realclimate.org, units don't make a difference - either large or small:
Changing a unit to have a small sounding number doesn’t actually change anything; neither the significance nor the accuracy. .... – gavin


Conspiracy criterion no. 1: Nefarious intent


Bob insinuates nefarious intent, writing sarcastically:
We can see Cheng et al. employed the cool-the-early-data method to increase the warming rate for the period of 1970 to 2005. [sarc on] They’re probably saving the warm-the-more-recent-data method for the next paper, which will then show the oceans warming even faster so the modelers can crank up climate sensitivities. [sarc off]
Bob Tisdale is a conspiracy theorist, as you probably know. He accuses anyone and everyone of "fudging" data - that's criterion number 1 of conspiracist ideation: "nefarious intent". What you'll find, however, is that Bob doesn't mention the process by which the global reconstruction was put together. If he's aware of it, he hides it well. Given that he seems to think it's not necessary to grid-average or map the temperature profiles, he's probably not aware of what's wrong with his article (not just his conspiracy theorising, but his basic premises).


Bob is amazed


Bob is amazed by science. Not really. He scoffs at science. No detailed and careful calculations for Bob. Give him some charting software and a bit of data he doesn't understand, and he will plot some charts and weave you a conspiracy theory. He wrote:
Isn’t that amazing? Using the “NODC-mapping” method, Cheng et al. show a warming rate for the global oceans of +0.0045 deg C/year for the period of 1970-2005, but the reconstruction for the same depths of 0-700 meters directly from the NODC website show a warming rate of only +0.0033 deg C/year. Now consider that the outcome of Cheng et al.’s new method of infilling the oodles and oodles of missing data in the depths of the oceans shows the global oceans warming at a rate of +0.0061 deg C/ year. In other words, for the period of 1970 to 2005, Cheng et al. have almost doubled the warming rate of the basic NODC reconstruction for the depths of 0-700 meters.
I don't know where Bob got his "doubled" from. Could he have worked it out using degrees Celsius? Surely not. What number would he have got if he'd used Fahrenheit? [Sou: Silly me. The answer would be the same. That wasn't the problem, as Alexander Coulter points out in the comments.] Anyway, the researchers themselves said the trend using NODC-mapping was around 20% lower than with their mapping method. They worked it out in joules per year.
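As the bracketed correction above implies, a ratio of two trends doesn't depend on the temperature unit - the conversion factor cancels. A quick sanity check, using the trend numbers Bob himself quotes:

```python
# Trends quoted by Tisdale for 1970-2005, in deg C per year
cheng, nodc = 0.0061, 0.0033

ratio_c = cheng / nodc                      # ratio in Celsius units
ratio_f = (cheng * 9 / 5) / (nodc * 9 / 5)  # same trends in deg F/yr

# The scale factor 9/5 cancels, so the ratio is unit-independent:
print(round(ratio_c, 2), round(ratio_f, 2))  # 1.85 1.85
```

So 0.0061/0.0033 ≈ 1.85 in any unit - which is presumably where Bob's "almost doubled" came from, though it compares trends over different end years, as a commenter points out below.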

The difficulties faced by scientists


Bob might think working out ocean heat content change is easy but it's not. Some of the difficulties faced by scientists are illustrated by this passage from Cheng15:
The existing estimates of historical OHC change by different groups show a substantial divergence and reveal uncertainties in its value. For instance, Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC-AR5) (Rhein et al. 2013) provided five independent estimates of historical OHC change from 1970 to 2010 by five different international groups: 74 PW (Smith and Murphy 2007), 98 PW (Ishii and Kimoto 2009), 108 PW (Palmer et al. 2007), 118 PW (Levitus et al. 2012), and 137 PW (Domingues et al. 2008). Among these values, the minimum is as much as a half of the maximum, implying a large divergence of ocean warming rate assessment.

What the scientists found in this study


Now readers here won't be amazed, but they will admire, no doubt. The scientists doing this research report their findings in some detail. Compare below what the scientists themselves write about their estimates, and the tone they use, with Bob's scoffing tone above. Bob in his Dunning-Kruger ignorance is superciliously confident that he's right and all the scientists in the world are wrong. From the paper:
In previous parts of this study, a new assessment on historical OHC is obtained using a new methodology. It is not clear that this is the best estimate of OHC because there are still some major challenges in OHC calculation. One of the major remaining challenges is how to infill the OHC data gaps, which requires the choice of mapping methods. Figure 4 provides the OHC time series calculated by applying NODC-mapping, where an objectively interpolation method is used to fill the data gaps. NODC-mapping results in (0.42±0.10)×1022 J/yr OHC trend, ~20% smaller than our estimate. The discrepancy suggests that a comprehensive examination on the existing mapping methods is required to understand their performances, and then to identify an optimal method. 
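For scale, the quoted numbers let you back out the paper's own trend with a rough (and entirely my own, back-of-envelope) calculation: if the NODC-mapping trend of 0.42×10²² J/yr is ~20% smaller than the study's estimate, then:

```python
nodc_mapping_trend = 0.42e22  # J/yr, NODC-mapping trend quoted from the paper

# "~20% smaller than our estimate" implies (rough back-of-envelope only):
implied_study_trend = nodc_mapping_trend / 0.8
print(f"{implied_study_trend:.2e} J/yr")  # roughly 5.2e21 J/yr
```

That's a difference in joules per year, not the near-doubling of Celsius trends that Bob claims.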

Below is Figure 4 from the paper, which compares the method used in this study (red line in a) with results using the NODC-mapping method (blue line in a) as well as results from CMIP5 climate models (grey in a).

Figure 4. a). Upper 0-700m ocean heat content calculated using 40 CMIP5 models (historical run) in grey, with the ensemble mean shown in black. CMIP5 results are compared with the observation-based estimate using the strategies presented in this study (red) and using NODC-mapping (dashed blue). The two major volcanoes are marked by black arrows.
b). The annual global averaged upper-ocean warming rates of CMIP5 models (in grey, with ensemble mean in red) computed from first differences of OHC700m in °C/yr. The upper-ocean warming rate calculated using observations (Figure 2b) is attached for comparison. Source: Cheng15

If you're wondering about the mapping method used by Cheng15, the authors describe the challenges, and the various methods used in other reconstructions, then explain the approach they used, which involves dividing the ocean into "ship area" and "Argo-ship" areas. I'll let the authors explain:
Alternatively, Cheng and Zhu 2014b proposed a simple method by dividing the global ocean into a Ship Area (with sufficient data coverage in the past 45 years) and an Argo-Ship Area (with sufficient data coverage only since Argo Era). The yearly mean OHC in the Ship Area can be directly calculated based on the available observations. However, in the Argo-Ship Area, because of the insufficient data, inter-annual ocean variability cannot be well represented, instead, only a linear long-term OHC trend is calculated. Therefore, the global-mean OHC is calculated by combining the OHC estimates in Ship Area and the obtained linear trend in Argo-Ship Area. By using this strategy, the long-term trend of global OHC can be estimated.
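The combination step described in that passage can be sketched in a few lines. This is a toy version (the function name and numbers are mine, not from the paper): yearly values from the well-sampled Ship Area plus only a linear long-term trend for the sparsely sampled Argo-Ship Area.

```python
def combine_ohc(ship_ohc, argoship_trend, years, ref_year):
    """Global OHC series: yearly anomalies from the well-observed Ship
    Area, plus a linear long-term trend standing in for the sparsely
    sampled Argo-Ship Area (whose inter-annual variability can't be
    resolved from the historical data)."""
    return {yr: ship_ohc[yr] + argoship_trend * (yr - ref_year)
            for yr in years}

# Toy numbers, purely illustrative:
ship = {1970: 0.0, 1971: 1.0, 1972: 1.5}   # Ship Area yearly anomalies
combined = combine_ohc(ship, 0.25, [1970, 1971, 1972], 1970)
print(combined[1972])  # 1.5 + 0.25*2 = 2.0
```

The point of the split is that only the long-term trend, not year-to-year wiggles, is claimed for the poorly sampled regions.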

A difference between scientists and pseudo-scientists


Seems to me that while Bob Tisdale is happy to average temperature profiles only, regardless of where those profiles are and the sources of data, the scientists themselves are a lot more fussy and meticulous. They are also much less certain about the best approach. Rather than throw their hands in the air and wail "it's all too hard", they continue to work to improve the information, and identify where more work is needed.


From the WUWT comments


As expected, the WUWT-ers uncritically accept Bob's conspiracy theory and some elaborate on it. All they want is someone to tell them it's not warming, or if it is it's not humans, or if it is it's not bad, or if it is there's nothing to be done.

markstoval loves a good conspiracy. He comes across as a right wing authoritarian follower with his distrust of "government". He foolishly takes at face value Bob's comment about "models". Bob doesn't understand climate models. He thinks they are weather forecasts. And he doesn't tell his readers what the paper itself says or what the researchers have done.
July 26, 2015 at 4:22 am
“Once again, the climate science community has shown, when the models perform poorly, they won’t question the science behind the models, they are more than happy to manufacture warming by adjusting the data to meet or exceed the warming rate of the models.
That is a great observation Bob. I don’t see how any climate “science” can be done using the “data” that is available from the government funded sources. I know that the planet has warmed from the depths of the Little Ice Age but I don’t think we know a whole lot more than that. Certainly anything from any division of the USA’s NASA has to be looked at with a lot of skepticism. I even understand that some agencies delete old data as they manufacture new data. (manufactured data?)
The theory behind CAGW (or climate weirding or whatever) has failed. The models have failed. The predictions have failed. Only fiddling with the data partially hides these facts.
Thanks for the continued vigilance Bob.

Bloke down the pub has a friend who doesn't know much about bodies of standing water:
July 26, 2015 at 4:24 am
In a discussion with a friend about sea temps, I asked him how many thermometers he thought he’d need to accurately measure the temp of an olympic sized pool. When he came up with a double digit figure, I pointed out how much bigger the worlds oceans are and how few thermometers were measuring down to 700m. His faith in his ability to claim that the oceans are warming was somewhat shaken.

opluso has an odd thought:
July 26, 2015 at 4:45 am
According to your Figure 4, the standard for “observational sampling coverage” is met if you sample a 1 degree bin area once per year.
Once per year? I wonder whether there is a difference in trends that result from using only rarely sampled sites vs only using frequently sampled ones? I suppose you also might have to control for latitude, season, etc. but that would just call the value of once-per-year data into question even more.
Have you posted a comparison of temp (or heat content) trends comparing once-per-year sample areas to areas that have more frequently gathered data? 

taz1999 wants to know how to measure the heat content of a glass of water. No, actually, they just want to measure the temperature of a glass of water:
July 26, 2015 at 8:04 am
What kind of lab equipment and process would it take to measure the temperature of a glass of water to this precision? 

Paul Homewood demonstrates no. 2 of the five telltale signs of denial - the logical fallacy. In this case the argument from incredulity or "I don't believe it therefore it ain't so":
July 26, 2015 at 5:14 am
I find the whole idea that they can measure “the effect of CO2″ in the deep oceans frankly laughable.
Even assuming there is a mechanism by which this effect can be distributed there, the changes in temperature would be impossibly small to measure. 

Walt D. is just a plain old denier who probably thinks climate science is a hoax
July 26, 2015 at 6:09 am
Sounds like climate change articles need to include an MGM style disclaimer.
All temperatures used in this study are completely fictitious. Resemblance to the actual temperature at any location, past or present, is purely coincidental.
They could also add:
This study was based on real data. The numbers have been changed to protect the climate models.

And Walt gets this unhelpful reply from conspiracy theorist Bob Tisdale
July 26, 2015 at 6:18 am
Thanks, Walt D. 

That's enough of that. The comments are more of the same. Scientists all have nefarious intent and are out to hoodwink deniers. Deniers are too canny to be fooled. They just know that the past 200 years of science is a hoax.


References and further reading



Cheng, Lijing, Jiang Zhu, and John Abraham. "Global upper ocean heat content estimation: recent progress and the remaining challenges." Atmospheric and Oceanic Science Letters (2015): 101. doi: 10.3878/AOSL20150031. (open access)

The oceans are warming faster than climate models predicted - article by John Abraham at The Guardian.

New Study Finds Quicker Upper Ocean Warming than Previous Thought - press release from the Chinese Academy of Sciences

Cheng, Lijing, and Jiang Zhu. "Artifacts in variations of ocean heat content induced by the observation system changes." Geophysical Research Letters 41, no. 20 (2014): 7276-7283. DOI: 10.1002/2014GL061881 (open access)

Good, S. A., M. J. Martin and N. A. Rayner, 2013. "EN4: quality controlled ocean temperature and salinity profiles and monthly objective analyses with uncertainty estimates", Journal of Geophysical Research: Oceans, 118, 6704-6716, doi:10.1002/2013JC009067 (subs req'd)

Biased Bob Tisdale is all at sea - HotWhopper - another recent instance of Bob Tisdale not understanding the data. He wrote something like four or five articles on this, each one more tortuous than the previous as he scrambled (in vain) to dig himself out of his hole

9 comments:

  1. Well done. Looking at WUWT gives one mixed feelings of schadenfreude and fall on floor laughing at the stupidity.

  2. "I don't know where Bob got his "doubled" from."

    He's roughly dividing 0.0061 by 0.0033; that ratio doesn't change if the units change. However, he should also probably know that the trend in the paper is from 1970 to 2014, and the trend he calculated was until 2005. The NODC 0-700m temperature trend over the period until 2014 is 0.003929, so the difference between that and the Cheng et al. value is an increase of about 55%, not 100% as he suggests.

    1. What I find interesting from their paper is that they say, with respect to Figure 2, that there is multi-decadal variation in ocean heat content rate judging from the changes in the 9-year rate. But that is almost certainly due to their choice of filter; when you take moving derivatives you change the frequency spectrum for the new data. The derivative series amplifies certain frequencies and dampens others as a bandpass filter. As it were, this choice of filter amplifies periods from 7.8 years to 41.7 years, with a peak at 13 years. So it's not much a wonder why that variation—a roughly 15 year cycle—exists in the 9-year data.

    2. Maybe "power spectrum" would be a better term here. The power given in a periodogram for those periods is amplified.

    3. To just lay on comment after comment: it should be clear to anyone graphing the publicly available OHTemp data from NODC:
      https://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_avt_data.html

      that what is available there (and what Tisdale used) is not what Cheng et al. made with the "NODC-mapping". I'm not sure exactly what that is, but either way the two series are visually very different from each other. You'd think Tisdale might want to ask for clarification on the differences between the NODC-mapping used by Cheng et al., and whatever was used for the publicly available data. As in: does "NODC-mapping" actually mean the exact method as used for the publicly available data, or no? Are the authors using an updated method that hasn't made its way into the publicly available data? Is the public data more updated?

      It would seem to me to not be useful to yet compare those trends.

    4. I haven't checked, but AFAIK presenting their main result with "NOCD-mapping" result was solely to show how the choice of mapping made a difference to the result, so they used the same data throughout.

      NODC may well use a (slightly) different set of base data, different QC, different gridding etc so this result and NODC would not be expected to be identical.

    5. "that ratio doesn't change if the units change" - you're right of course Alexander. I wasn't thinking.

  3. July 1, 2016, The War on Science.... by John Abraham, published in Skeptikal Science dot com, kind of makes Abraham "of SKS" but good luck with the rest of your hatchet attack on Tisdale.....

    1. The SkS article was a repost from The Guardian. The SkS team is listed here. John Abraham isn't part of the Skeptical Science team.

      >>"good luck with the rest of your hatchet attack on Tisdale..."

      If there is anything you disagree with in the article, feel free to tell us what, why, and back it up with science. (I see a certain irony in your mentioning the War on Science. Just guessing, but are you one of the anti-science advocates that Shawn Otto writes about?)

