Wednesday, January 21, 2015

Tricks used by David Rose, denier "journalist", to deceive

Sou | 3:41 PM

This is just a short article to show the journalistic tricks that professional disinformers use. It's excerpts from an article by denier David Rose, who is paid to write trash for the Mail, a UK tabloid of the sensationalist kind. He'd probably claim that he's just "doing his job". His job being to create sensationalist headlines and not bother too much about accuracy, but to do it in such a way as to stop the paper ending up in court on the wrong end of a lawsuit. Just. (The paper probably doesn't mind so much getting taken to the Press Complaints Commission.)

Here is what David Rose wrote:

The Nasa (sic) climate scientists who claimed 2014 set a new record for global warmth last night admitted they were only 38 per cent sure this was true.

First of all notice the use of the word "admitted" - as if it was something that the scientists were forced into, whereas in fact they provided all the information in their press briefing. Notice also that David doesn't even know how to spell NASA. Then notice his straight-up lie. It's not true. David has taken one number and used it out of context. The 38% number is the probability that 2014 is the hottest year, compared with the probability that 2010 or any other hot year is the hottest. 2010, the next hottest year, only got a 23% probability by comparison. Here is the table showing, out of 100%, what the different probabilities are:

You can see how David misused the 38% number. In fact the odds of it being the hottest year on record are the highest of the lot.
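To see how a number like 38% can still be "the highest of the lot", here is a minimal Monte Carlo sketch in Python. The anomaly values and the ±0.05°C uncertainty are made up for illustration (they are not NASA's actual inputs): sample a plausible "true" anomaly for each candidate year from its error distribution, then count how often each year comes out on top.

```python
import random

# Hypothetical annual anomalies (°C) for four warm years, with a
# common 0.05 °C standard error -- illustrative numbers only.
years = {2014: 0.68, 2010: 0.66, 2005: 0.65, 1998: 0.61}
sigma = 0.05
trials = 100_000

random.seed(42)
wins = {y: 0 for y in years}
for _ in range(trials):
    # Draw a plausible "true" anomaly for each year from its error distribution
    draws = {y: random.gauss(mean, sigma) for y, mean in years.items()}
    wins[max(draws, key=draws.get)] += 1

for y in sorted(years, key=years.get, reverse=True):
    print(f"{y}: P(hottest) ~ {wins[y] / trials:.0%}")
```

No single year gets anywhere near 100% when the error bars overlap, yet one year can still be clearly the most likely record-holder - which is exactly what the 38% vs 23% comparison in the table shows.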

What is David's next atrocity?

In a press release on Friday, Nasa’s (sic) Goddard Institute for Space Studies (GISS) claimed its analysis of world temperatures showed ‘2014 was the warmest year on record’.
The claim made headlines around the world, but yesterday it emerged that GISS’s analysis – based on readings from more than 3,000 measuring stations worldwide – is subject to a margin of error. Nasa (sic) admits this means it is far from certain that 2014 set a record at all.

See how David Rose distorts things. How he uses rhetoric, abusing words like "emerged" and "claim" and "admits". He is flat out lying about the "far from certain". He just made that one up. It may not be "certain", but it is much more certain than "far from".  And it is more "certain" that 2014 was the hottest year than that any other year was the hottest year.

If David Rose were arguing that you beat your wife, even though you don't, he'd probably write it up as:

The so-called scientist claims that he doesn't beat his wife. He admits that he cannot prove he doesn't beat his wife. However this journalist can show that it has emerged that his claim is subject to a margin of error.  95% of wife-beaters deny beating their wives.

And I doubt he'd add the confidence limits to the 95% number!

David Rose continues his deception writing:
Yet the Nasa (sic) press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C. The margin of error is said by scientists to be approximately 0.1C – several times as much.

That section by David Rose contains the same misprint of NASA (as Nasa), plus the same journalistic tricks of rhetoric, as well as a lie. The margin of error of the annual averaged global surface temperature is described in the GISS FAQ as ±0.05°C:
Assuming that the other inaccuracies might about double that estimate yielded the error bars for global annual means drawn in this graph, i.e., for recent years the error bar for global annual means is about ±0.05°C, for years around 1900 it is about ±0.1°C. The error bars are about twice as big for seasonal means and three times as big for monthly means. Error bars for regional means vary wildly depending on the station density in that region. Error estimates related to homogenization or other factors have been assessed by CRU and the Hadley Centre (among others).

If the press release didn't include any confidence limits, then where did David Rose get his numbers from, you ask? That's a very good question. It turns out that NOAA and NASA held a press conference, during which they showed some slides and explained the confidence limits, among other things. So David Rose was being very deceitful, wasn't he? Which isn't a surprise.

What bit of deception does he swing to next? Well here it is. You be the judge:
As a result, GISS’s director Gavin Schmidt has now admitted Nasa thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond. Another analysis, from the Berkeley Earth Surface Temperature (BEST) project, drawn from ten times as many measuring stations as GISS, concluded that if 2014 was a record year, it was by an even tinier amount. 

More rhetorical tricks using words like "admitted". More deception by David Rose tabloid denier extraordinaire. When and how and where did David Rose ask Gavin Schmidt the question? I don't know. It looks as if it was via an accusatory tweet of the type "have you stopped beating your wife", like this one:

Yet Gavin Schmidt did respond to David Rose, so it was David Rose who told the lie:

That's about it. I'll leave it to you to decide who is the grand deceiver.

I'd not trust David Rose, denier journo, with a single fact.  It is alleged that he is a master of deception. He'd probably try to claim he is just doing his job.


  1. Sou -

    Lest we forget other greatest hits from Rose:

    ==> "Like Prof Curry, Prof Jones also admitted that the climate models were imperfect:"


    ==> "The data does suggest a plateau, he admitted,"


    ==> "and this despite the fact that Phil Jones and his colleagues now admit they do not understand the role of ‘natural variability’."


    ==> "Even Prof Jones admitted that he and his colleagues did not understand the impact of ‘natural variability’

    Will my comment disappear if I provide the link?

    1. David Rose's list of alleged sensationalist buzz words is longer than that of Anthony Watts. Anthony mainly only uses one word: "claim". David, being a professional denier writer, has collected more: "emerged" "alleged" "claim" "admitted".

      He'd probably be forced to admit that he overuses his alleged claims though.

      PS You can always archive anything iffy and provide a link to that. Links to mainstream newspapers are okay - even the gutter press that David writes for is okay by me.

    2. The Daily Mail (also Daily Fail, Daily Heil) is an atrocious 'newspaper' full of racist, sexist, homophobic, child fantasy, right-wing lies and hatred. I would implore no one visits their website. Sometimes, though, it can be accidental. I have a Chrome plugin called "Kitten Block" which redirects you to the Tea and Kittens page. It's also available for Firefox :) http://www.theguardian.com/media/mediamonkeyblog/2011/mar/28/kitten-block

    3. vitaminccs

      I could not agree more and here is one expert in such black arts:


      notorious also for her vitriolic stance on the MMR imbroglio.

  2. Sou -

    The "Nasa" isn't a mistake, it's just the Daily Mail's in-house style. A lot of British newspapers will avoid using all caps for government bodies. See this elsewhere on the site. Or this from a New Zealand news site.

    So, I guess that's one thing he didn't lie about.

    1. Thanks, Tony. It's odd. Why do they use US instead of Us or Usa I wonder :D

    2. It might even be the default behaviour of their word processor. In Microsoft Word if you type in "nasa" as the first word on a line, it changes it to "Nasa". You have to type in "NASA" for it to remain all caps.

      Just a thought.

    3. The convention is that if an acronym is spelled out when spoken, ie U-S-A, then caps are used, whereas if it is pronounced phonetically, ie Nasa, then only the first letter is capitalised. I think it's silly and that caps should always be used to denote that it's an acronym, but that does seem to be common to most newspapers in the UK now.

    4. Yet it's inconsistent, or do people in the UK say gee-eye-ess-ess when talking about NASA's GISS?


    5. Hmm, good question. I think in that context it makes sense to capitalise GISS because people will not generally be familiar with the organisation and it is pointing out that it is an acronym. But that kind of inconsistency is exactly why I would capitalise all acronyms.

    6. I agree, though, just newspaper style. As the saying goes, "Reason? Hell, there isn't any damn reason. It's just our policy." I used to work for a newspaper once upon a time.

  3. This might seem a weird question, but anyway...

    Are the error bars for any given year completely independent of all other years, or do the issues that lead to error margins point in the same direction all the time (but we don't know which)? Or is it a mix of both?

    To give a (rather lengthy) example of what I'm talking about suppose the anomaly for Year X is +0.7 +/- 0.05 and the anomaly for Year Y is +0.61 +/- 0.05:
    If the errors are independent, Year X could be as low as +0.65 and Year Y could be as high as +0.66, so there is some small probability that Year Y was hotter than Year X.

    If not independent, Year X could be as low as +0.65, but if so then all other results are likely to be at the bottom end of their error range (ie the factors that created the error are more or less constant across the whole data set), so Year Y is very unlikely (in this case) to be hotter than +0.56. Alternatively, if we consider that Year Y might be at the upper end of its error range (+0.66), then it is likely X is also at the top end of its range (+0.75), and Year X is definitively, 100% guaranteed to be the hotter year, even if we don't know exactly how hot it was.

    The discussion I see on this sort of thing suggests it's the former (or always assumes it at least), but in my work (business analysis, not very sciency I'm afraid) I see a lot of cases where the latter case is a better representation of what is going on - while the ranges of uncertainty overlap, the same factors are at work throughout the dataset, and if they push one data point high or low, they are doing it for all.

    1. Interesting question, Frank. A similar thought flitted through my head when I was writing these articles, but I didn't know the answer, so the thought disappeared as quickly :)

      Nick Stokes would be a good person to ask (or Gavin Schmidt or someone else from NASA, or the Hadley Centre, or Kevin Cowtan and Robert Way). The other day Nick wrote about the main sources of year to year difference. He plays with the data a lot, so he might have thought about that very issue - or not.

      PS It worked again :)

    2. I will have a try, although maybe I don't really understand.

      The uncertainty is calculated for the year they calculate the value for. The uncertainty is expressed as a range. It usually represents the 95% (2 sigma) certainty range ie there is a 95% probability that the actual value falls in the calculated certainty range.

    3. Frank D,
      I see you did ask at Moyhu, and I'm sorry that it initially went into spam. It's there now. I think you have a very relevant question, to which I've tried to respond.

    4. Cheers Nick, don't worry about the spam thing, it happens to me on many blogs (including, intermittently, here), nobody knows why...

      I saw your reply, thanks - I misspoke in linking it to spatial sampling error. What I meant to say is that what I'm thinking of is similar to your reasoning about why spatial sampling error is not really a factor (since 2014 is built from essentially the same set of stations as 2010).

      I suspect that in this case of comparing like-to-like the probability that 2014 was the hottest is rather higher than the 38% - 48% (or 1.5 to 3 times as likely as any other year) that is being advanced.

      Harry, to explain a bit further, suppose we have a station that complies with all standards, but because of some local topographic anomaly or the type of paint used on the screen or whatever, tends to read fractionally high or low (but still within the standard measurement uncertainty for that equipment). If you want to compare it to another instrument, the errors of the two instruments are independent (as per my case 1), but if you are comparing one instrument's results from one year to the next, in the simple case, the factors that led to its error last year are the same as this year (as per my case 2), and the measurement uncertainty in relative terms should be much lower. That's just a simple one for one instrument, but I'm wondering if there are larger scale factors that cannot be ignored in absolute terms, and are thus embedded in the margin of error (how hot was this year?) that can, in fact, be ignored when looking at the data in relative terms (was this year hotter than that year). Nick identifies (at Moyhu) spatial sampling as one of these possibles (at least for comparing years that are very close together), I was being more generic having less of a grasp of the detail.
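      Frank D's two cases can be put into numbers. Here is a small Python sketch using his hypothetical values (Year X = +0.70, Year Y = +0.61), treating the ±0.05 as a one-sigma error purely for illustration. It compares the probability that X was hotter than Y when the errors are fully independent versus fully shared (systematic):

      ```python
      import random

      # Frank D's hypothetical anomalies; +/-0.05 treated as a 1-sigma error.
      x_mean, y_mean, sigma = 0.70, 0.61, 0.05

      random.seed(1)
      trials = 100_000
      indep = shared = 0
      for _ in range(trials):
          # Case 1: each year gets its own independent error draw
          ex, ey = random.gauss(0, sigma), random.gauss(0, sigma)
          if x_mean + ex > y_mean + ey:
              indep += 1
          # Case 2: one systematic error applies to both years, so it cancels
          e = random.gauss(0, sigma)
          if x_mean + e > y_mean + e:
              shared += 1

      print(f"P(X hotter), independent errors: {indep / trials:.1%}")
      print(f"P(X hotter), fully shared error: {shared / trials:.1%}")
      ```

      With a fully shared error the offset cancels out entirely, so X is hotter with certainty, whereas with independent errors there is a real (if small) chance of the ranking flipping - which is the intuition behind Frank's case 2.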

  4. Speaking of deceive, I've spent the day writing R code to load up the data that was 'referenced' in the Monckton paper, "Why models run hot: results from an irreducibly simple climate model"

    Now in Fig. 1 it says "vs. observed anomalies (dark blue) and trend (bright blue), as the mean of the RSS, UAH, NCDC, HadCRUT4 and GISS monthly global anomalies [9–13]"

    And in those references 9-13 are two satellite data series, and three surface data series (well not really, the HadCRUT4 is just a paper, not a link). Now it even says "mean of the RSS, UAH, NCDC, HadCRUT4 and GISS monthly global anomalies", but guess what. It's all a fraud. Only the RSS and UAH data are used in the chart. (There is a clue, as the dope forgot to actually change the friggin graph. At the top in blue it says RSS + UAH)

    I actually downloaded all the datasets, used R to consolidate the data and create a mean. I then plotted them, and guess what. It doesn't look the same. When I plot just the RSS and UAH the graph I get is exactly the same as what is in the paper.

    What a deceptive charlatan.

    I really hope this gets out and his fraud of a paper is retracted.
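    For anyone who wants to repeat the check without R, here is a toy Python sketch of the idea. The anomaly values are made up for illustration (the real check would use the downloaded RSS, UAH, NCDC, HadCRUT4 and GISS series): averaging only the two satellite series gives a visibly different composite from averaging all five series named in the paper.

    ```python
    # Made-up monthly anomalies (°C) for five datasets -- illustrative only.
    series = {
        "RSS":      [0.27, 0.22, 0.30],
        "UAH":      [0.24, 0.19, 0.28],
        "NCDC":     [0.65, 0.60, 0.68],
        "HadCRUT4": [0.60, 0.56, 0.63],
        "GISS":     [0.66, 0.61, 0.70],
    }

    def composite(names):
        """Month-by-month mean of the chosen anomaly series."""
        cols = [series[n] for n in names]
        return [round(sum(vals) / len(vals), 3) for vals in zip(*cols)]

    sats = composite(["RSS", "UAH"])      # what the figure actually plots
    all5 = composite(series)              # what the caption claims is plotted
    print("RSS+UAH mean:  ", sats)
    print("All-five mean: ", all5)
    ```

    Plotting the two composites side by side (with the same axes, as above) is enough to show which one the published figure matches.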

    1. Just so you can all see,

      Here is the mean of surface and satellite, i.e. what was claimed

      Here is the mean of the satellite data, i.e. what was in the paper.

      I've tried to keep the axis the same so they can be all easily compared.

    2. I’m shocked, shocked to find that a misleading graph got into the Monckton et al paper!

      Round up the usual suspects, where are the gremlins that caused this ... minor error that has no impact on the conclusions!

    3. DJ,

      (1) Well that figure should show the 95% confidence intervals of the satellite derived data set trend line (with flared/tapered ends). It does not.

      (2) The FAR trend line should also start with the 95% confidence interval bounds associated with the uncertainties of the underlying time series (in other words, not a point start but an interval start). It does not.

      (3) Doing (1)-(2) above, if these confidence intervals overlap throughout, one would call Figure 1 a non sequitur.

    4. Here is what the first figure in the Monckton paper SHOULD have looked like.


      It uses a mean of the surface temperature datasets compared to the IPCC FAR projections. Given that the reality is so different to the fantasy that Monckton is trying to portray, it's a wonder how the paper was even published. It explains why a second-rate Chinese publication was chosen.

    5. Also, the FAR projection used by Monckton is the wrong one. The report used 4 scenarios A-D, Monckton cherry-picks the highest, scenario A and misrepresents it as the IPCC projection. We now know that the forcings in Scenario A were an over-prediction. By 2011 we had not reached the 2000 value in Scenario A for CO2 forcing. Scenarios B-D were closer and their associated temperature trends were between 0.1 and 0.2C/decade. In other words, even in 1990 the models used by the IPCC were making accurate predictions.

      Monckton really is full of it.

    6. Monckton admits he is not doing science

      I received the following reply to the above point at WUWT:

      "At last, some genuine and valid criticism. I had not recalled that IPCC had made its 1 k by 2025 prediction under Scenario A. However, Scenario A was its business-as-usual scenario, and it had incorrectly predicted a far greater rate of forcing, and hence of temperature change, than actually occurred."

      So, his paper plotted a cherry-picked temperature prediction even though he must have known that it was not based on real-world data, and presents this as 'empirical evidence that the models run hot', when, as he now concedes, it is evidence of no such thing.


    7. "I had not recalled that IPCC had made its 1 k by 2025 prediction under Scenario A."

      That's really, really bad science - to add a line to a figure where you do not know the constraints.

      Now he has confirmed that it was wrong, but at the same time it looks like he is trying to get around it, too, by claiming this means the IPCC Scenario A was wrong and therefore the prediction is still wrong.

    8. It's classic Monckton. I guess he's been using this chart so long, he's forgotten it is bogus. The IPCC published 4 scenarios in the first Assessment Report, labelled A-D. Nobody knows in advance how forcings will develop so the IPCC run their models against a range of possible scenarios, low to high, and publish the predicted temps from each. They labelled Scenario A as 'business as usual', meaning high coal usage, and low emission controls. In fact we now know that the actual forcings ran some way below Scenario A, closer to B-D which all had similar values in the early decades. This was due in some measure to the collapse of the Soviet Union, rather than emissions control, an event few people could have foreseen, and the reason the IPCC use scenarios! The actual 2011 forcing number for CO2 is in the paper but this did not prevent Monckton from presenting Scenario A - and only Scenario A - as the IPCC projection.

      So, one of the IPCC scenarios turned out to be an overestimate compared to real-world observations; if the range was correctly chosen, this will always be the case. But the topic of the paper was not IPCC projections, it was 'why models run hot' and the figure appeared in Section 2 'Empirical evidence of models running hot'.

      In fact, Monckton is so pleased with his FAR prediction, it appears again, no more legitimately, in Fig 2.


    9. Phil, this misrepresentation of the FAR is quite serious. Since you've managed to get some responses out of Monckton, implicitly acknowledging it is wrong, perhaps you may want to inform the journal of this misrepresentation.

      I know some have suggested writing a rebuttal, but the journal apparently has page charges, and do people really want to go through that effort?

    10. Marco - there's so much wrong with the paper - this is just the facet that I investigated in detail - that I suspect a correction would be several times larger than the original, so, while I may drop the journal a line, I think the best course is just to let the article sink without trace.

  5. A Rose by any other name would smell just like Andrew Bolt.

  6. Very nice job Sou!
    So good and concise, I couldn't resist mirroring it over at my place.

  7. Hi Sou,

    I somewhat belatedly discover that you have recently been invading "Snow White's" territory!

    It's hard to know where to start when deconstructing David Rose's particular brand of allegedly "investigative journalism". Perhaps one might start here?


    P.S. Sorry about all this:


  8. Replies
    1. And the article contains a Godwin :-)

    2. Thanks for the heads up Lars.

      Fray joined!

    3. Delingpole uses Rose as support. Can either of them show science Gcse if they put their qualifications together?

    4. No, judging by the nonsense that goes into (virtual) print under their byline at least.

      Shouldn't that be GCSE? (In joke!)

    5. Eeewww! A direct link to a Delingpole article.

    6. Quite right Anon. Try this one instead?


      I currently find myself "debating" with someone over there who claims that "According to recent Met Office data average global temperature decreased very slightly during this decade". What planet do you suppose (s)he is living on?

    7. @Jim Hunt

      I saw your "debate" over there. You are made of sterner stuff than me. Good luck.

      Eeewww! An indirect link to a Delingpole article.

  9. Sou - Tried to reach you on Twitter, but no joy so far, and things are warming up fast over here, so....

    A slightly modified version of this article is now visible as the first ever guest post over Great White Con:

    "Tricks Used by David Rose to Deceive"

    Please advise of any issues ASAP.

  10. Sorry, Jim. I'm flat strapped at the moment. Saw your article last night - good one :)

  11. "I'm flat strapped at the moment"

    I know that feeling!


    Thanks once again for your assistance. Hopefully saving the planet from David Rose's flights of fancy will be easier under IPSO than it was with Paul Dacre at the PCC!

  12. PS. For those interested in a list of sources exposing David Rose various misrepresentations and lies, see:
    "Profiles in climate science denialism, David Rose the yellow journalist"

