
Wednesday, July 23, 2014

James Risbey and co: Another perspective on surface temperature observations and climate models

Sou | 12:13 AM

The new Risbey paper, which so puzzled Anthony Watts and Bob Tisdale and caused them to make public fools of themselves yet again, was not an evaluation of climate models. Nor was it an evaluation of models' ability to emulate ENSO. The research was answering the following question:
How well have climate model projections tracked the actual evolution of global mean surface air temperature?
Their answer was:
These tests show that climate models have provided good estimates of 15-year trends, including for recent periods and for Pacific spatial trend patterns.

Perennially Puzzled Bob Tisdale gets it wrong again


Bob Tisdale, writing at WUWT (archived here), gets so much wrong in his article about the Risbey paper that it would take several articles to demolish every item. I'm not about to go to all that trouble, so let me focus on his biggest mistake.

If I had to pick one mistake out of all the mistakes Bob made, apart from not understanding basic thermodynamics and conservation of energy, perhaps his biggest is that he thinks CMIP5 climate models are designed to model day-to-day and year-to-year real-world weather for the next several centuries. They aren't. That's an impossible task. It would mean being able to accurately predict not only random weather fluctuations but also every action that could affect weather: how many aeroplanes are going to be flying where and when; where and when the next volcanic eruption will be, how energetic it will be and what will be the composition of the stuff that blows out of it; how the sun will vary over time. Plus being able to find a computer big enough, and people to code every single possible present and future interaction between the air, the land, animals, plants, the oceans, the ice, clouds, rivers, lakes, trees, the sun and outer space. Humans are good and computers are powerful, but not that good and not that powerful. It's not just random fluctuations and disturbances in nature; we also affect the weather. Scientists model climate with those big computer models, not day-to-day weather.

I've written more below about the difference between models that are used to make weather forecasts and models used for climate projections - with some examples and links to further reading.

Climate models and natural internal variability - if in phase it's pure chance


Before talking about any of the hows and whys of Bob Tisdale getting it wrong, let me follow up my article from a couple of days ago and talk more about the Risbey paper itself and climate models in general. If you want to read more about climate models, I recommend the article by Scott K. Johnson at Ars Technica.

The abstract and opening paragraph of Risbey14 are important to understand if you want to know what the paper is about. In particular these sentences:
Some studies and the IPCC Fifth Assessment Report suggest that the recent 15-year period (1998–2012) provides evidence that models are overestimating current temperature evolution. Such comparisons are not evidence against model trends because they represent only one realization where the decadal natural variability component of the model climate is generally not in phase with observations.

Climate vs weather


The point in that quote about the decadal natural variability component of models being generally not in phase with observations is what this paper is all about. Some people wrongly think that climate models designed to make long term projections should also reflect natural internal variability happening at exactly the same time as it happens in reality. Why they have that expectation is anyone's guess. As you know, even weather forecasts that are primed with the most recent weather observations can only predict weather a few days out at most before chance takes over and they head off in all sorts of random directions. Climate models that don't have recent observations plugged in, but rely on physics, will include natural variability, but that random natural variability will only occasionally be in sync with reality, and then purely by chance. It's the effects of long term forcings like increasing greenhouse gases that are evident in climate projections, and that's what we are most interested in. In other words, long term climate models are for projections of climate, not weather.


How realistic are climate model projections?


The Risbey paper was looking to see whether or not climate model projections were overestimating warming. It took a different approach from previous studies, which have looked at the question from other angles. Stephan Lewandowsky has explained three previous approaches in his article about the Risbey paper. You can read about them there; there is no need for me to repeat them here.

I will repeat the following from Stephan's article though, because it's a point that science deniers ignore. Observations remain within the envelope of climate model projections. As Stephan showed, this is illustrated by a chart from a paper by Doug Smith in Nature Climate Change last year.

Source: Smith13 via ShapingTomorrowsWorld
The chart above shows three things. First, the overall trend is up: it's getting hotter. Second, observations are within the envelope of model projections. Third, temperature goes up and down over time rather than rising at a steady pace. The bottom part of the chart shows the trend per decade. It goes up and down but has mostly been in positive territory since 1970.


Predicting weather and climate


The Risbey paper sets the scene by describing the difference between a climate forecast and a climate projection, then the difference between weather variability and climate. It describes a climate forecast as attempting to take account of the correct phase of natural internal climate variations whereas climate projections do not.

Apart from any practical considerations, there are good reasons for this distinction. For climate projections looking ahead from decades to centuries, it doesn't matter much when natural internal climate variations come and go on a year to year basis. The interest is in the long term overall picture. For the next several decades and centuries, the interest will be in how much surface temperatures will rise, how quickly ice will melt, how soon and by how much seas will rise and where they will rise the most etc.

Even for projections over decades, we are most interested in regional climate change. Not interannual fluctuations such as ENSO variations so much as long term changes in the patterns of rainfall and temperature. ENSO affects weather. It will happen without climate change. The more important questions for the long term are things like: Will a region be getting wetter or drier? Will it be subject to more or less drought? Will the annual pattern of precipitation change, which will affect agricultural production, water supply management, flood control measures?

Climate models aren't yet able to be relied upon for projecting regional patterns of climate change with great accuracy but they can provide a guide.

The point is that different models are adapted and used for different purposes. There are models used to predict short term weather. Some forecasts are fully automated (computer-generated) and others have human input. They are only useful for looking ahead seven to ten days. They are good for guiding decisions on whether to pack an umbrella, plant a crop, be alert for floods or schedule construction tasks.

There are models that are constructed or adapted to make medium term weather forecasts, like those used by the Bureau of Meteorology to predict ENSO looking out over a few weeks to months. They are useful for making farm management decisions, for utility companies deciding whether to store or release water from dams, for putting in a rainwater tank if it looks as if the next few months will be dry, etc. They are forecasting weather, not climate.

Then there are models adapted or constructed for longer term regional outlooks and models developed to look at the world as a whole.


Energy moves around between the surface, the ocean and the atmosphere


Climate models are used for more than just surface temperature, though it's surface temperature that probably gets most attention in the media and on denier blogs. The world as a whole is warming up very quickly. Different parts of the system heat up at different times and at different rates. Sometimes the air heats up more quickly than at other times. Different depths in the oceans heat up at different rates at different times too. Ice melts, but not at a steady pace. All that is because all these different parts of the system are connected. Heat flows between them. Anyone who goes swimming will have experienced the patches of warm water and patches of cold water in lakes, rivers and the sea. Just as heat is uneven on a small scale, it's uneven on a large scale.

I know some of you will be wondering why I'm taking readers back to climate kindergarten. Well if you read some of the comments to the previous article you can guess why. And if you manage to wade through even a part of Bob Tisdale's article at WUWT and the comments beneath, you'll get an even better appreciation.


CMIP5 projections are based on climate models not weather forecasts


All that is a prelude to the Risbey paper. It was looking at whether or not the recent global surface temperature trends are any different to what can reasonably be expected from model projections, given natural internal variability in the climate.

So the first thing to understand is that the CMIP5 climate model projections used by the IPCC will not generally model internal variability in phase with that observed over the model run. They do include internal variability but it's a stochastic property. It's purely a matter of chance when any model will show an El Niño spike or a La Niña dip in temperature for example. Whether a particular spike or dip lines up with what happens is pure chance. Sometimes a model run will be in phase with the natural variability observed and other times it won't. It's not important. Over time the natural internal variability cancels out. It's the long term trends that are of interest here, not whether an El Niño or La Niña happens at a particular time.


The Risbey approach


I call it the Risbey approach because James Risbey from Australia's CSIRO was the lead author. However I believe it was Stephan Lewandowsky who came up with the idea to take a look. Stephan thought it would be interesting to see how model runs compared with observations when the modelled natural internal variation happened to be in phase with what was observed.

Recall the point about climate models incorporating internal natural variation, but not necessarily in sync with when it happens in reality. So what the team did was to look at sliding fifteen-year windows and scan for model runs that were most closely aligned with observations, taking ENSO as the main measure of internal natural variability.

From the perspective of interannual internal variability, the factor that affects surface temperature as much as or perhaps more than any other is ENSO. El Niño warms the atmosphere and La Niña has a cooling effect on the atmosphere. What happens is that heat is shifted between the ocean and the air. If there were no global warming trend, the surface temperature would go up and down with ENSO, leaving a long term trend of zero. (I've written a long article about ENSO, which includes references to good authoritative sources.)

This brings us to the opening paragraph of the Risbey paper. On a short time frame, to see if models are reasonable, one needs to look at models that forecast. In other words, models that include natural internal variation that is in phase with what actually happens. Short to medium term forecast models do this by being initialised with the most recent observations, or by continually incorporating the latest observations. Most climate projection models are independent of recent observations. They are based on physics, not live readings of what is happening. So if they happen to be in phase with internal natural variability at any time, it will only be by chance.

If you want to allow for natural internal variability and compare models with observations, one way you can do that is to look at model runs that happen to have a period of natural internal variability in phase with observations for the period of interest. That's what the Risbey team did.

Risbey14 looked at individual model runs and compared them with observations. For each fifteen year period they selected the model runs that were in phase with real world observations in relation to ENSO. They started with 1950 to 1964, then 1951 to 1965, then 1952 to 1966 etc. After selecting the models most closely aligned with ENSO phases observed for one fifteen year period, they moved up a year and looked at the next fifteen year period and so on. As well as that they were able to select, for each fifteen year period, the models that were most out of phase with ENSO.
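
To make that selection step concrete, here is a rough Python sketch of the idea. It's my illustration, not the authors' code: the data below are synthetic stand-ins, and the 0.01 K per year tolerance is the one given in the paper's methods (quoted in the comments below).

```python
import numpy as np

YEARS = np.arange(1950, 2013)   # annual series, 1950 to 2012
WINDOW = 15                     # window length in years
TOLERANCE = 0.01                # K per year, from the paper's methods

def trend(series):
    """Least-squares linear trend (per year) of an annual series."""
    return np.polyfit(np.arange(len(series)), series, 1)[0]

def in_phase_runs(obs_nino34, model_nino34_runs, start):
    """Indices of runs whose 15-year Nino 3.4 trend is within
    TOLERANCE of the observed trend for the window at `start`."""
    window = slice(start, start + WINDOW)
    obs_trend = trend(obs_nino34[window])
    return [i for i, run in enumerate(model_nino34_runs)
            if abs(trend(run[window]) - obs_trend) <= TOLERANCE]

# Synthetic stand-ins: a slow trend plus random "ENSO-ish" noise.
rng = np.random.default_rng(42)
obs = 0.01 * np.arange(YEARS.size) + 0.3 * rng.standard_normal(YEARS.size)
runs = [0.01 * np.arange(YEARS.size) + 0.3 * rng.standard_normal(YEARS.size)
        for _ in range(38)]

# Slide the window one year at a time: 1950-1964, 1951-1965, ...
for start in range(YEARS.size - WINDOW + 1):
    selected = in_phase_runs(obs, runs, start)
    print(f"{YEARS[start]}-{YEARS[start] + WINDOW - 1}: runs {selected}")
```

Notice that the subset selected changes from window to window, which is the point: any one run drifts in and out of phase purely by chance.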

Here is a conceptual visualisation of what they did:




Don't take too much notice of the actual data. It's a composite of CMIP5 with GISTemp. And I'm not suggesting they eyeballed it like that. They didn't. The above is just to get across the concept of what was done. In practice, the researchers selected model runs on the basis of their similarity in timing with real world observations of Niño 3.4 and related spatial patterns of sea surface temperature in the Pacific. [Correction: Stephan Lewandowsky has advised that during the selection phase they didn't look at the spatial patterns beyond Niño 3.4, which makes sense in the context of Figure 5 discussed below.] The image above is just so you get the idea that they looked at fifteen year periods, starting with the period from 1950 to 1964.


The meaning of "Best" and "Worst"


That leads me to the discussion of "best" and "worst" that you may have read about and that Bob Tisdale got so wrong.

The research was not evaluating models. Figure 5 in the paper is labelled with the words "best" and "worst". However there is no suggestion that the four "best" models are in any way superior to the four "worst" models in terms of what they are designed to do, which is future projections of climate. The word "best" denoted the subset of model runs in any fifteen year period that were most in phase with ENSO observations. Conversely, the word "worst" denoted the model runs that were least in phase with ENSO observations.

The paper compares the spatial pattern of temperature variation for the period 1998 to 2012. It compares models most in phase with the ENSO regime with observations as below. The real world observations are on the right:

Source: Figure 5 Risbey14

It also compares models most out of phase with the ENSO regime with observations:

Source: Figure 5 Risbey14

The point being made was that the model runs most in phase with the real world ENSO observations had a "PDO-like" spatial pattern of cooling in the east Pacific. Look at the above charts close to the equator near South America. In the top left-hand chart, the pattern, while not as cool, is closer to the observations on the right. In the bottom chart, the warming is smudged all over and it doesn't show the cooler east Pacific. Figure 5 showed that the model runs most out of phase had a "more uniform El Niño-like warming in the Pacific".

Bob Tisdale was thrown by the words "best" and "worst". He also seemed to think the climate model runs should have been identical to the real world. He's wrong because he doesn't understand what CMIP5 climate models are for or how they work. As the paper states, when you select only the model runs in phase for the period, you get much closer spatial similarity, not just a closer match for surface temperature as a whole. When you lump all the climate model runs together, the multi-model average smooths out the internal variability. The runs cancel out each other's internal variability because, on shorter time scales, it's only by chance that some have internal variability in phase with the real world.
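
To see that cancellation in action, here is a toy illustration (mine, not from the paper): give a hundred synthetic runs the same forced trend plus an ENSO-like cycle at a random phase each, and the ensemble mean collapses onto the forced trend even though every individual run wiggles.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(60.0)      # sixty "years"
forced = 0.02 * t        # the shared forced warming trend

# Each run: same forcing plus a 5-year "ENSO" cycle at a random phase.
runs = np.array([
    forced + 0.25 * np.sin(2 * np.pi * t / 5 + rng.uniform(0, 2 * np.pi))
    for _ in range(100)
])

ensemble_mean = runs.mean(axis=0)
print(np.abs(runs[0] - forced).max())        # one run: wiggles of about 0.25
print(np.abs(ensemble_mean - forced).max())  # the mean: stays close to the trend
```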

When it comes to "best" and "worst", the paper was only referring to the extent to which selected model runs were in phase with the real world. There is no suggestion in the paper that any one model was any better as a model than any other. The research was not an evaluation of climate models. In fact, different models were in and out of phase in different fifteen year windows. Just because by pure chance a model run was in phase with ENSO over a particular time period did not mean that same model run was in phase with ENSO in other time periods.

Models will go in and out of phase with the real world over time. That randomness is intrinsic to climate models. The models are designed to respond to forcings like increasing greenhouse gases and changes in solar radiation. They exhibit internal variation too, because the physics is built in. But it won't generally be at exactly the same time as it happens in the real world. (If there were long term models that could do that, then most weather bureaus would be out of business.)

Now, if you have a large enough sample, then in any fifteen year period some model runs will line up pretty well with natural fluctuations in the real world, just by chance. So if you want to look at a short period of time, like say the last fifteen years, you can see if any model runs line up with ENSO over that period and, if they do, how close they are overall to global surface temperature observations.

That's probably the nuts and bolts of the paper. Comparing all model runs for a short period like the last few years won't tell you a whole lot about whether the models are realistic or not. That's because individual model runs won't necessarily be in sync with the real world in regard to natural variability. They are expected to reflect the dominant climate forcings but not the exact timing of ENSO for example.

By looking at model runs that just happen to be in sync (by chance), you can see how much they vary from the real world observations. This study showed that once you allow for the fact that models won't generally be in phase with natural variability, the models are even closer to real world observations. I'll finish with the two figures showing trends that Stephan had in his article. The one on the left is from the "in-phase" model runs compared with observations. The one on the right is those least "in-phase" compared with observations.

Source: Risbey14 via Shaping Tomorrow's World


Further reading:




Acknowledgement: Thanks to James Risbey for answering my naive questions so promptly. Thanks to Stephan Lewandowsky for describing the work so well. Any mistakes in this article are mine. It wasn't checked by any of the paper's authors before publication. Since then, I've made one correction, thanks to Stephan L. And thanks to all those who made available a copy of the paper so I could write this article - RN, AS, JR and SL.

Note: updated with more links to WUWT. Sou 11:14 pm 24 July 2014


James S. Risbey, Stephan Lewandowsky, Clothilde Langlais, Didier P. Monselesan, Terence J. O’Kane & Naomi Oreskes. "Well-estimated global surface warming in climate projections selected for ENSO phase." Nature Climate Change (2014) doi:10.1038/nclimate2310

44 comments:

  1. > model runs were in phase with the real world

    I haven't read the paper yet, but I think this is dubious language. Saying that (as they do) suggests, roughly, that there is a 5-year ENSO wave and they're picking the ones in phase. But what they are actually doing is picking the models that have the right trend in global temperature over a 15 year period (I think). In which case, there will be times when a model with no ENSO at all will match.

    1. William, from the paper:

      To select this subset of models for any 15-year period, we calculate the 15-year trend in Niño3.4 index in observations and in CMIP5 models and select only those models with a Niño 3.4 trend within a tolerance window of 0.01 K yr⁻¹ of the observed Niño 3.4 trend. This approach ensures that we select only models with a phasing of ENSO regime and ocean heat uptake largely in line with observations.

    2. This comment has been removed by the author.

    3. Ah, OK, just the Nino3.4 region but my point still applies: they aren't doing anything as sophisticated as picking up "in phase"; they're just selecting for trend, as your quote shows.

      That means that, for example, if the real world was a 0.2 °C/decade trend, with a 2 °C amplitude "ENSO" with a 5-year period, then a model with a simple 0.2 °C/decade trend would be accepted as a match. No?

    4. I don't understand the point you are making, William. They selected runs based on the Niño 3.4 region, not on the global surface temperature.

      This passage from the methods description may help:

      Climate models cannot simulate every ENSO phase transition, but that is not critical here. To approximate the phase of ocean heat uptake in a 15-year period one may need to capture only the general sense of whether the models are El Niño or La Niña dominated over the period. To this end we use Niño3.4 as an index of ENSO phase and calculate the trend in Niño3.4 over the period to indicate the general tendency of the system towards regimes dominated by either ENSO state (similar to PDO phases). Models with a similar Niño3.4 trend to observations are then selected to represent the subset of models from the projection ensemble that just happen to have a similar response of ENSO over the 15-year period. This approach selects models with a statistical similarity that is related to the desired feature (ocean heat uptake). The approach does not guarantee any kind of dynamical consistency between models for each 15-year period.

      The choice to fit 15-year trends to Niño3.4 is not the only way to estimate the phase of ocean heat uptake rates. We also tested a method based on trends of low-pass Niño3.4 values to more directly mimic the PDO. Results were repeated where we detrended the Niño3.4 series in observations and models and compared the low-pass Niño3.4 slopes to select in-phase models. The results for the low-pass method are very similar to those for the direct method shown.

    5. Errrm, well, if you didn't understand my maths I'm not sure I can help.

      They only compare the overall trend over the 15 year period. Any pattern of temperature change (in Nin03.4) that produces the same trend will match. The excursions during that period could be huge, or tiny, or completely different to reality; as long as the overall trend matches.

      I think it's dubious to call that matching "phase", because a model with no ENSO at all could, indeed will, match sometimes.

    6. It wasn't your maths I didn't follow, William. It was what you wrote, and trying to fill in the gaps of what you didn't say to figure out what you were trying to say ;)

      I hope what I wrote helped. Given that it's only a fifteen year window and only a relatively small area of the ocean, I'd not think "the excursions" are likely to be hugely huge or hugely tiny.

      For readers - William seems to have understood in his third comment. The scientists looked at the trend in the Nino 3.4 region, not the overall global trend in temperature, to select model runs.

      If there is a change in the Nino 3.4 region it is reasonable to view it as closely related to ENSO (and PDO phase). As indicated in the paper, that doesn't necessarily mean the model will be an exact match of the real world over that period, but it will be in the vicinity. Model runs that are very different in trend in this region over the fifteen year period will obviously not be "in phase" at all.

      As they stated (see my previous comment), they looked at it from another perspective as well.

    7. Correction: any excursions from the real world could be hugely tiny, but probably not hugely huge.

    8. It's as well to get such details right if we're going to reference this paper in future - and I think we're going to, especially fig 4. It pretty much puts to rest "models have failed" and the mythical Pause, where they join the "no consensus" theme.

    9. "I think its dubious to call that matching "phase", because a model with no ENSO at all could, indeed will, match sometimes."
      There are no "models with no ENSO". All of them exhibit ENSO like behaviour as an emergent property of the underlying physics. If you do enough model runs, some of them will produce ENSO events in synch with observations. Those ones track the surface temperature and spatial effects more accurately.

    10. That's the paper in a nutshell, Anonymous :) I'll just add to "If you do enough model runs, some of them will produce ENSO events in synch with observations" ... for some periods of time.

    11. The more that is written about this paper, the less that I understand it.

      I've asked for a copy.

      But at the very least they are ambiguous in their selection criteria; for instance, I would have minimized the RMS between model and observation. Also, if different AOGCM's were the 4 worst/best for any 15-year period, and these are different subsets of the 18 AOGCM's for each 15-year period, then BIG problems arise.

      Finally, my understanding of the CMIP5 simulations is that actual climate forcings were used to "tune" the various AOGCM's using the method of hindcasting up through circa 2005; after 2005, the forcings were based on projected forcings.

      This begins to sound like a patchwork quilt of "covering the roulette table" meaning that one is bound to win each and every time, but the payout is less than the total bet.

      ;(

    12. What part do you find ambiguous, Everett? I'll see if I can make it clearer.

      Why are there BIG problems with different subsets being different when it comes to the timing of different natural variability?

      Remember, CMIP5 models aren't weather forecasts. They model climate change not forecast day to day or year to year weather. If climate models could be used to forecast weather variability like ENSO, we'd know when ENSO phases would happen looking out decades. We can't even be sure a few months out because of the random nature of internal variability.

      What's RMS?

      Read the Ars Technica article to see how the models are tuned. They aren't curve fitted to the past.

      http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/2/

    13. I am quite familiar with the Finite Element Method, having taken two courses (one as an undergrad) and having much experience with those modelling technologies in my actual work at the USACE ERDC WES CHL, thank you very much.

      RMS = Root Mean Square.

      Now to begin, and to paraphrase someone else:

      "We're sitting here, I'm supposed to be the franchise AOGCM, and we're in here talking about practice"

      Now, we've all purportedly been told that 15-years does not establish a LINEAR trend (BTW, I never bought that one even if N = 30 years, see 1940-1980 SAT for a prime example of that flawed logic).

      We're talkin' 'bout 15-year linear trends, correct? Starting in 1950-64, 1951-65, ..., 1998-2012. So where are the tabulated R^2 statistics for the 18 AOGCM's?

      What's the horizontal and vertical resolutions of the AOGCM's? In air, in water?

      How do the AOGCM's model mixing at such large spatial scales?

      Do they model both horizontal and vertical mixing processes, specifically in the oceans, very important that, the oceans that is?

      Gradient Richardson Number (for vertical mixing of a stratified flow field).

      On this subject, I"m light years ahead of you, when I actually see someone publish the actual stratified oceanic flow fields and get those basic features correct (spatial thermohaline circulation, spatial downwelling, spatial upwelling, vertcal mixing, horizontal mixing and in a global RMS fashion, that would begin to be a good starting point for AOGCM's.

      Now I really do need to read said paper.

    14. I didn't mean to come across as rude, Everett, but I can see you took it that way so my apologies.

      The researchers were just matching up trends in the Nino 3.4 region (on the basis that it's a good proxy for the natural variability that shows through in global surface temperature fluctuations for short time scales). That's all. Nothing too fancy.

      The paper explains it well. At least I thought so. Stephan's article is useful too. So is Dana's, which I linked to above.

      Mine, obviously, isn't so clear :(

    15. I found your post to be pretty clear Sou. Maybe that's because I'm in the same solar system as you, rather than flying several light years ahead of it.

    16. > I am quite familiar with the Finite Element Method

      Alas, the models are finite difference, or spectral.

      > What's the horizontal and vertical resolutions of the AOGCM's?

      If you want to know, why not look it up? e.g. http://www.metoffice.gov.uk/research/modelling-systems/unified-model/climate-models

    17. USACE ERDC WES CHL- US Army Core of Engineers, Coastal & Hydraulics Laboratory.
      http://chl.erdc.usace.army.mil/

      "I'm a engineer, and I'm okay.
      I sleep all night and I work all day.
      I cut down climate scientists, I eat my lunch.
      I go to the lavatory.
      On Wednesdays I go shoppin'
      And have buttered scones for tea."

      and so on.

    18. That's the United States Army Corps of Engineers, Engineering Research and Development Center, Waterways Experiment Station, Coastal and Hydraulics Laboratory

      Read the paper. Very easy to understand and well written.

      I would say more on its technical defects (the linear treatment of the ENSO 3.4 index being the most obvious), but you all know the Internet and blogs, half life ~2 days.

      Have a nice day.

    19. To quote a comment from an earlier thread:

      "Aaahahahahahahahahaha
      What sheer presumptuous, arrogant nonsense."

  2. It's funny how the Watties have seized on Fig 5 as the nut of the paper, when the real nut of the paper is Fig 4, which shows the effects of the model selection on global temperature trends.

    1. Bob keeps trying to push the barrow that all the models are wrong and none are useful. In fact all the models are wrong but some are very useful :)

      Bob doesn't understand a lot of things. He confuses long range climate models with weather forecasts. He's put up charts that any modeler would be proud of (or envious of) and says that because one tiny little wiggle in one tiny little section was different in the real world, the model is useless. Never mind that you couldn't hope for a closer match with reality.

  3. An observation I haven't seen anyone make yet is that the results suggest that many models give useful results when they happen to produce a run aligned with ENSO. Having many useful models is reassuring.

  4. FYI, data for global surface temperatures from Cowtan & Way can be found here:

    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html

  5. Hello. That's a very nice read. But since you seem to be on such close terms with one of the authors, did you also ask him WHICH model runs "happen to have a period of natural internal variability in phase with observations for the period of interest."

    Because that is really what the "deniers" are complaining about. Replication. How can anyone replicate their results, if they don't tell the world which of the 18 they picked (never mind the semantics "best" or "worst").

    Looking forward to your answer !

    1. Sorry, I meant to point out that was one of the points that Bob Tisdale and Anthony Watts got confused about.

      To be clear, there isn't a complete set of model runs that are in phase all the time. The set of model runs that fit the criteria was different for each 15 year period 1950-1964, 1951-1965, ..., 1998-2012. There were 48 fifteen year periods looked at and for each one there were a number of model runs that fit the criteria of "best" fit and "worst" fit, or most in phase with Nino 3.4 and least in phase.

      "The subset of models is random in that the models have random phase for ENSO cycles relative to the real world and will come in and out of phase with the real world from time to time."

      The way to replicate the results is to do the runs yourself. It's a big job though, particularly downloading all the model data. I've put links in the article if you want to replicate it. The Nature article also contains links to the data and describes the methodology. All the information required to replicate the work is there in the paper and the data is all freely available on the internet.

      See also ATTP's comment in the earlier discussion.

    2. When I wrote "do the runs yourself" I meant analyse the existing data, not run the models.

    3. The answer is simple.

      Which models "happen to have a period of natural internal variability in phase with observations for the period of interest." -- all of them.

Which models "happen to have a period of natural internal variability out of phase with observations for the period of interest." -- all of them.

      As has already been said, the paper isn't comparing the models. It examines what happens when model runs *accidentally* match observed ENSO conditions.

      And what seems to happen is that, if a model run accidentally matches real ENSO, it shows a warming trend similar to the observed trend.

    4. Technical papers are not written with the expectation that you need to educate illiterate people on the basics - they are written for an audience of people who are, to borrow a phrase from the patent sphere, "normally skilled in the art".

      Now the point isn't to make a carbon copy of their work, it's to follow their general method your own way and see if you get a similar result.

      Here's how I'd do it, if I wanted to. You write a computer program that reads in the ElNino3.4 data (specifically the time and the index value). Then you scan in the ElNino3.4 data (again the time and index value) for all the model runs that report the index. Let's assume the time standards are already the same so we don't have to interpolate any values, then calculate the sum of normed residuals between the observed and modeled index in 15 year clusters. Thus for each model run you have a set of 15 year intervals and an estimate of how close the model came on average over the 15 year span to the ElNino3.4 Index. Easy enough if you have the data sets and know the rudiments of programming. You then sort your normed residual data set from low to high or vice versa. The runs that are closest to the ElNino3.4 will have low values and those that are further away will have large values. You then take some set of low and high alignment runs and compare their other outputs to other real-world measurements over the 15 year span. This isn't rocket science to do. But if Watts and his creatures can't figure this out on their own, that's their problem.
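
      A rough sketch of that recipe in Python (illustrative only; `obs` and `runs` are hypothetical stand-ins for Niño 3.4 series you would have to download yourself):

```python
import numpy as np

WINDOW = 15  # years per cluster

def window_rms(obs, run, start):
    # RMS residual between observed and modelled Nino 3.4 index
    # over the 15-year cluster starting at index `start`
    residuals = run[start:start + WINDOW] - obs[start:start + WINDOW]
    return np.sqrt(np.mean(residuals ** 2))

def rank_runs(obs, runs, start):
    # Run indices sorted from closest (lowest RMS) to furthest (highest)
    return sorted(range(len(runs)),
                  key=lambda i: window_rms(obs, runs[i], start))

# Hypothetical usage, with obs a 1-D array of annual Nino 3.4 observations
# and runs a list of equally long modelled Nino 3.4 series:
#   order = rank_runs(obs, runs, 0)            # window starting at year one
#   closest, furthest = order[:4], order[-4:]  # a "best four" and "worst four"
```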

    5. Captain Flashheart | July 23, 2014 at 10:12 AM

      It's cute that the Watties have been complaining that models don't match data for the last umpteen years, but they never bothered to write a single paper investigating (or confirming) this claim - and as soon as someone does, with additional details about why, they complain about how bad it is...

      Remember the interminable comments of one rgbatduke, sometimes elevated to articles, in which he argued that modelers should identify those models that best match observed temperatures, and discard the rest? Well, that's basically what Risbey et al. have done - also satisfying rgbatduke's demand to know why these models are "better" - and everyone at WTFUWT is complaining ...

      ... every time these clowns have one of their theories addressed by actual scientists, they in return offer yet more proof that they have zero interest in actual science. Whodathunkit?

    6. Some of the skeptics get caught up by the idea that if the models extended 150 years into the future, they would project average world temperatures 10+ degrees hotter than today.

      But, if I'm reading this post and the comments correctly (BIG if), the y axes on the models don't really matter in this study. What matters is that, broken into fifteen year intervals, some subset of model runs matches the observed climate fluctuations reasonably well.

  6. "...because it's a point that science deniers ignore. Observations remain within the envelope of climate model projections."

    In my experience they do more than ignore it - they frequently misrepresent it by stating the observations have crashed out of the projections.

  7. Two things to consider: The English speaking (writing) world is full of deniers, and Why Americans stink at math

    Trying to understand and explicate their errors is a step in the right direction and Sou has done an admirable job of explaining where they go wrong .... flashcut to Sou leading the class in repeating, "Weather does not equal Climate" ... "Weather does not equal Climate" ... "Weather does not equal Climate"

    Unforced natural variation results in a zero trend over time. So we also need to lead a few choruses of "NV=ZT", "NV=ZT", "NV=ZT"

    If a BT or any of the Watties could keep a few concepts clearly in focus at the same time and then combine them in a coherent problem-solving method, they wouldn't be deniers. It's pretty much a defining characteristic.

    The question, though, that came immediately to my mind is: what does the average for the subsequent 15 year interval look like for the 'best' model runs? Is there any coherence over the next 15 years, or do they go in random directions?

    1. I found that Why Do Americans Stink at Math article to be quite fascinating, Kevin. The fact that most people perceived a 1/3lb burger to be *smaller* than a 1/4lb burger because, duh, 4 is bigger than 3... well, the mind boggles :-\

    2. The next 15 years are irrelevant since "best" is specific to the 15 year period in mind. This is not a test of model runs' ability to mimic the ENSO trend; it's a test of those which just happened to match the reality. That has no bearing on how well they match subsequently.

  8. Sacre bleu !

    Does not Mssrs Watts et Tisdale go into too much details here, I thinking.
    This must be the minutia of all the mothers of all dissections, but why.
    Watts has wasted his time and our time in reading the nonsense.

    it is my view that is all.

  9. Hi Sou, thanks for this. The biggest mistake that people seem to be making (Bob Tisdale and others) is confusing the models with the model runs, so thanks for clearing that up.

    Please could you clarify for me why the models include ENSO in the first place if over the long run the El Nino and La Nina cycles cancel each other out? It would seem that simpler models would still be able to project warming from CO2 without attempting to put the ocean cycles in there... Apologies if it's a silly question!

    Rob

    1. Rob, if you read the Ars Technica article, it explains how models are constructed. The models don't "include ENSO" as such. They are based on physics so they emulate changes in the ocean and changes between the ocean and atmosphere, which is essentially what ENSO is.

      http://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/

      The big CMIP5 models are designed by building in the physics (and chemistry and with some even biology AFAIK), which means they do demonstrate natural internal variation as well as human-caused forcing. In fact, the earth system neither knows nor cares what is forcing the climate. It doesn't discriminate on the basis of where the extra CO2 comes from, for example.

      Because they are based on science, a more realistic projection can be made, taking account of melting ice, changes in albedo from devegetation and less snow and ice cover, longer term changes in ocean currents etc etc - all of which in turn impact on global climate.

      Someone else may chip in with a better explanation than I can give.

      The thing is that the internal variation is random. That, plus the fact that the models are not initialised with the latest observations, means that they won't generally be in phase with what is observed. They are intended to help figure out what climate change will be over coming decades. They aren't intended to emulate year to year weather.

      There are other models built for shorter term forecasts (or similar models but modified for weather forecasting), which are updated with weather observations. Randomness for weather forecasting models also kicks in. However, because the latter are starting from a known set of weather observations, it will take a while for "random" to mess up the forecast.

      Anyone else, feel free to add your two bobs worth.

    2. The fact that an ENSO-like behaviour emerges from climate models is worth noting.

    3. Hi Cugel,

      Yes I thought that was remarkable too! Is it really correct that ENSO emerges entirely out of the fundamental ocean physics and ocean-atmosphere interactions? The ENSO process as a whole must be unimaginably complex, with many interactions and variables involved...

      Perhaps I am not thinking about this correctly? When people say 'ENSO emerges' do they mean a 'basic' version of the cycle (warm water at X, wind speed at Y etc) or something else?

      Again apologies if the question doesn't make sense. Modelling is a complex area and I don't want to give the impression that I should be able to understand the finer details without doing the proper science.

      Cheers
      Rob

    4. Rob, if climate models could emulate all changes that take place in the system, that would be wonderful. You're right in thinking that they don't emulate nature perfectly. Just as there is no perfect "model" of the human body or the brain or a flying aeroplane, there is no perfect model of climate. Climate models are a tool but they have their limitations, which are well known to scientists who use them.

      My response still stands, though. The models are based on science rather than statistics, so they will emulate internal natural variability as well as response to external forcing.

      Climate models are constantly being improved. They are limited by computer technology among other things.

      "All models are wrong but some are useful". Together with looking at past climate change they can tell us a lot about what to expect in the future.

      However it's important to realise that Risbey14 does not suggest that the model runs emulated ENSO perfectly. Because a large proportion of shorter term temperature change is tied to ENSO, this study looked at model runs that were most in phase in the Nino 3.4 region.

      If you want to know how well CMIP5 and CMIP3 models capture the ocean atmosphere changes associated with ENSO you need to look for papers on the subject. You can always use google or google scholar. For example, the following paper gives some insights:

      http://ugamp.nerc.ac.uk/~ericg/publications/Bellenger_al_CD12s.pdf

      If you want to see climate models in action visually, wander through the articles on Isaac Held's blog and you'll see some animations as well as some interesting discussions.

      http://www.gfdl.noaa.gov/blog/isaac-held/

    5. Rob : Systems that are complex in detail can still result in fairly straightforward bulk behaviour. The basic structure that leads to ENSO involves easterly trade winds at the limits of the Hadley Cells pushing surface water from the Americas towards the Western Pacific. This causes upwelling in the East and a sub-surface return current from the West. The strength of this process is a measure of the ENSO state at the time - stronger trades means La Nina-ish conditions, weaker or even absent trades means El Nino.

      That's the bulk ENSO-like behaviour that emerges from models using only well-understood physical laws and real-life geography. A big plus for models, to my mind.

  10. This comment has been removed by a blog administrator.

    1. Off topic as well as contravening other parts of the comment policy.

