Not much has been worth writing about at WUWT lately. There hasn't been too much mocking of science - or no more than usual. And I haven't seen any
Russian Steampipes or
OMG it's insects rubbish these past few days either. There is a greed article (
archived here) - strenuously objecting to helping less developed and poverty-stricken countries deal with climate change. Par for the denier course. As the headline suggests, when it comes to deniers it's "
all about money".
Anthony Watts wrote two articles about why Steve Goddard was wrong when he claimed the US temperature data had been "fabricated" (archived
here and
here), which elicited some compliments but more protests. (All the while, Anthony is still promising to show why he thinks the US temperature record is all wrong - part of his general protest against his own previous paper on the subject, which showed there was not much wrong with the US temperature record at all. He's still trying, apparently without success so far.)
Which brings me to another thing I noticed, though not from anyone at WUWT, despite their apparent interest in the topic. While science deniers are busy denying the science, real scientists continue to do science. Victor Venema has a
couple of
articles about an initiative relating to a worldwide temperature record. The most recent article
is here. The
paper looks interesting and is available for comment if you are an expert in homogenisation algorithms. It's called:
Concepts for benchmarking of homogenisation algorithm performance on the global scale and you can read it
here.
The work was an international collaboration, involving 19 scientists from the UK, USA, Australia, Switzerland, Canada, Germany, Italy, Spain and Norway. No-one from Asia or Africa. You'll probably recognise some of the names.
The abstract describes it rather nicely (my paras):
The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation.
We focus on uncertainties arising from the presence of inhomogeneities in monthly surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global scale synthetic analogs to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real world data do not afford us). Hence algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system.
Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
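To make the idea of "synthetic analogs where the truth is known" a little more concrete, here's a minimal sketch of the principle. It's not from the paper - the break positions, sizes and noise model are invented for illustration, and the real ISTI benchmark worlds are far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up "true" homogeneous monthly series: small trend + seasonal cycle + noise.
# (Illustrative only - purely my own toy assumptions, not the ISTI analog generator.)
n_months = 600  # 50 years of monthly values
t = np.arange(n_months)
true_series = 0.001 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n_months)

# Insert known inhomogeneities: step changes at known months (think station moves
# or instrument changes). Because we inserted them, the "truth" is known exactly.
breaks = {150: -0.6, 380: +0.4}  # month index -> size of shift (degrees C)
inhomogeneous = true_series.copy()
for month, shift in breaks.items():
    inhomogeneous[month:] += shift

# A data analyst group would be given only `inhomogeneous`, run their homogenisation
# algorithm on it, and their output would then be scored against `true_series`.
```

That last comparison - homogenised output versus known truth - is what lets the strengths and weaknesses of different algorithms be quantified, which real-world data can never do because the truth is never known.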
There's
a comment by
Blair Trewin from the Australian Bureau of Meteorology, who says the paper is sound and adds some suggestions.
If you're interested in records of global surface temperatures, this effort might be worth watching.
Update
(10:42 am 27 June 14)
I was half an hour early. Victor has just tweeted me
his latest article, in which he writes:
In our benchmarking paper we generated a dataset that mimicked real temperature or precipitation data. To this data we added non-climatic changes (inhomogeneities). We requested the climatologists to homogenize this data, to remove the inhomogeneities we had inserted. How good the homogenization algorithms are can be seen by comparing the homogenized data to the original homogeneous data. ...
...The main conclusions were that homogenization improves the homogeneity of temperature data. Precipitation is more difficult and only the best algorithms were able to improve it. We found that modern methods improved the quality of temperature data about twice as much as traditional methods. It is thus important that people switch to one of these modern methods. My impression from the recent Homogenisation seminar and the upcoming European Meteorological Society (EMS) meeting is that this seems to be happening.
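The scoring step Victor describes could look something like the sketch below. The metric and function names are my own toy choices, purely to illustrate the comparison, not any of the algorithms or scores actually used in the benchmarking studies:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two series."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def score_submission(true_series, raw_series, homogenized_series):
    """Did homogenisation move the raw data closer to the known truth?"""
    error_before = rmse(raw_series, true_series)
    error_after = rmse(homogenized_series, true_series)
    improvement = 1 - error_after / error_before  # > 0 means the algorithm helped
    return error_before, error_after, improvement
```

Fed with the `true_series` and `inhomogeneous` arrays from the earlier sketch plus a group's homogenised result, a positive improvement would mean the algorithm genuinely made the data better - which is essentially the headline finding for temperature in the quote above.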
Go to
Victor's blog to read more and, as a bonus, you can see a photo of the fat cat scientists, living in their towers made from elephant tusks :)