Climate science deniers, in the main, do not understand why models are used in science. Nor do they typically understand how they are used, or how they are constructed.
Today Wondering Willis Eschenbach demonstrated this quite well (archived here). He wrote about a recent article in Science by Professor Alex Hall, which discussed the merits and limitations of using General Circulation Models (GCMs) to model regional climate change through a process known as down-scaling.
In his article, Dr Hall describes downscaling as follows:
The concept behind downscaling is to take a coarsely resolved climate field and determine what the finer-scale structures in that field ought to be. In dynamical downscaling, GCM data are fed directly to regional models. Apart from their finer grids and regional domain, these models are similar to GCMs in that they solve Earth system equations directly with numerical techniques. Downscaling techniques also include statistical downscaling, in which empirical relationships are established between the GCM grid scale and finer scales of interest using some training data set. The relationships are then used to derive finer-scale fields from the GCM data.
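To make the statistical flavour of that concrete, here is a toy sketch of the idea (mine, not the paper's - the data, the single predictor and the plain linear regression are all invented for illustration; real schemes use many predictors and careful validation):

```python
# Toy statistical downscaling: establish an empirical relationship between a
# coarse GCM grid-cell temperature series and a co-located fine-scale (station)
# series over a training period, then apply it to new GCM output.
# All numbers below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data set": 30 years of annual means (degrees C)
gcm_coarse_hist = 14.0 + 0.02 * np.arange(30) + rng.normal(0, 0.3, 30)
station_obs = 0.8 * gcm_coarse_hist + 3.0 + rng.normal(0, 0.2, 30)

# Establish the empirical relationship (here just a simple linear regression)
slope, intercept = np.polyfit(gcm_coarse_hist, station_obs, deg=1)

# Use the relationship to derive a finer-scale estimate from future GCM data
gcm_coarse_future = 15.5 + 0.03 * np.arange(20)
station_projection = slope * gcm_coarse_future + intercept

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
print(station_projection[:5])
```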
He then goes on to discuss some of the limitations. In particular, GCMs typically have atmospheric biases, which are amplified when carried down to the regional scale. He also wrote about the strengths:
Fortunately, there are regional-scale anthropogenic signals in the GCMs that are not contaminated by regional biases. The best example might be the models' direct thermodynamic responses to anthropogenic forcing, most notably warming. Warming signals arise from hemispheric- to global-scale processes (5). Water vapor, cloud, and surface albedo feedbacks, as well as the ocean's relatively slow heat uptake, are the main factors that shape warming and its spatial distribution.
Alex Hall gave two specific examples of where downscaling enhances understanding of climate at the local level. One was the Great Lakes region in the USA and the other was the headwaters of the Ganges River in India. About the latter, he wrote that "the high elevation headwaters of the Ganges River warmed by a further 1.0°C by 2100 beyond the warming projected by the GCM. The reason is that well-understood snow albedo feedback effects are not resolved by the GCMs."
Alex concluded that downscaling can be of particular value to investigate climate change in regions having complex coastlines and topography. He wrote that only those GCMs with reasonably realistic atmospheric local circulation changes should be used (since it's atmospheric circulation that generally has the largest biases). Even then the results need to be examined to check they are realistic.
A model is relevant if it improves understanding
Willis Eschenbach thinks all this is hogwash. He decided to take a shot at Alex Hall for writing this:
The appropriate test of downscaling's relevance is not whether it alters paradigms of global climate science, but whether it improves understanding of climate change in the region where it is applied.
Willis Eschenbach thinks scientists have a time machine
Willis disagreed that a test of relevance is whether a model is, umm, sufficiently relevant to improve understanding of local climate change. He decided that the test of relevance is whether or not it matches observations.
Now I don't know if Willis has a time machine - or whether he thinks that scientists do. The next sentence that Alex Hall wrote was about the Great Lakes example he gave. Willis chose not to include it, probably because it would spoil his little jibe. Alex Hall wrote:
The snowfall example above meets that test. In many places, such fine spatial structures have important implications for climate change adaptation. In the urban areas of the United States and Canada most affected by lake effect snow, infrastructure and water resource planning must proceed very differently if lake effect snow is not projected to decrease significantly.
What Dr Hall was talking about was that the downscaled regional model suggested that although the Great Lakes will be frozen for less of the winter in the future, there won't be much of a decrease in snowfall around the Great Lakes. That's because lake effect snow becomes possible for more of the winter, which offsets the overall decrease in snow (with more precipitation falling as rain). By contrast, the full scale (not downscaled) GCM showed a much larger decrease in snowfall, with rain replacing it. It was only in the more finely scaled regional model that the increase in lake effect snow showed up.
Here is the figure that Alex Hall provided, demonstrating this. It compares a GCM (left) and a downscaled model (right) for 2050-2060, relative to 1979-2001.
Willis either couldn't understand the article he read, or he just felt like taking a pot shot at science. The fact that he thinks relevance is measured by future observations paints him as ignorant. And in case you think that he didn't mean to wait until 2050 or 2060 to see if the model was relevant, here is a comment from him:
Willis Eschenbach wrote, in response to the quoted question (my emphasis):
January 5, 2015 at 6:15 pm
...So, how would you propose to compare model projections to future observations?
I’m gonna assume that this is a serious question, although it seems obvious. Two choices. Either:
1) Initialize the models on the first half of the historical dataset, and then compare the output of the models to the second half of the dataset, or,
2) Make the actual predictions, and then wait and see if they come to pass.
Not all that tough …
Best to you,
So Willis wants the scientists to wait for fifty or sixty years or more to see if this year's model output "comes to pass", before determining whether or not a particular regional model is relevant.
I wonder if Willis is confusing relevance with accuracy. As I've discussed, the paper does talk about the importance of accuracy. That's necessary for usefulness - but the actual usefulness is gauged by the extent to which a model increases understanding. Being accurate without adding to understanding has very limited value. Probably Willis doesn't care one way or another. He just wanted to prove how clever he is. And he did, didn't he.
Decision-makers are pushing scientists for regional climate projections
One of the main points of Alex Hall's article was that planners and policy makers are pushing scientists hard to tell them what to expect from climate change at the regional level. The planners and policy makers aren't going to put all their plans and policies on hold for fifty or sixty years while waiting to see if a regional model run from 2015 turns out to be relevant. They would much rather the scientists let them know if the regional model increased understanding of climate at the local level. Then they can decide the specs for their bridges, and water supply infrastructure, and storm drainage systems, and transportation infrastructure etc etc.
What is the test of relevance for other (non-climate) models?
I'll just make one more point, which will be obvious to scientists and engineers and economists and financial planners and anyone who uses models. Many models can't be tested on observations to determine their relevance. Models are often used to help plan for the future, not always to understand the present or the past. Their relevance is determined by the extent to which they increase understanding of whatever it is that is being modeled.
The simplest examples - like models for aircraft design or bridge construction - are used to test for design flaws, to determine the materials needed, to work out the steps in construction, to see how the design will stand up to various stresses and so on. In other words, to improve understanding of whatever it is that is being modeled. For many things, it's a bit late to wait to see if observations match the models. Do you build and fly an aircraft thousands of km over years and take observations before deciding if the model is relevant?
If you're wanting to read about climate models, Scott K. Johnson's article at Ars Technica is the article I generally recommend.
From the WUWT comments
JKrob can't imagine that people who build roads and water reservoirs and drainage systems and telecommunications infrastructure would need projections of precipitation, or temperature or flood likelihood or drought. I'm guessing he/she is a conspiracy theorist, too:
January 5, 2015 at 6:36 pm
“Pressure to use (downscale) techniques to produce policy-relevant information is enormous…”
Interesting, but not surprising. ‘Pressure’ from whom – management, specific governments, UN…others??
RomanM writes a one-liner. He's a denier statistician, I believe, who I guess doesn't use models (or has no faith in his models, or doesn't think his models are relevant or extend understanding of anything):
January 5, 2015 at 6:40 pm
Lipstick on a pig… and a not-so-good looking one at that….
Andres Valencia wouldn't have a clue about climate or models or relevance or understanding but can't resist adding a meaningless comment. He also shifts from laughter to weeping readily. Hysteria?:
January 5, 2015 at 7:54 pm
“whether it improves understanding of climate change in the region where it is applied.”?
This must have come out of the “Humor” section of the paper, it’s just a joke.
Oh, wait, there’s no “Humor” section in this paper.
Thanks, Willis. I had to laugh, then cry.
Tom Trevor has a grand idea. Imagine building an economics model, starting with a single individual's random purchase of a packet of chewing gum:
January 5, 2015 at 9:09 pm
It seems to me that to Upscale would make more sense. First try to make an extremely accurate model of local weather over a very short period of time. Say something like this: It is now 65 degrees and 74% humidity on my porch I predict, based on my model that one minute from now it will be 65 degrees and 74% humidity on my porch. If over time your model show skill, then expand it in space and time, if still show skill expand it further. Eventually you might work it up to a global model of the climate in 100 years, but before it gets there it would have to show the ability to reasonable predict regional weather over at least a month. Working from future global climate to future local weather seems working backwards to me.
michaelwiseguy's comment is fairly typical of most of them at WUWT:
January 6, 2015 at 12:35 am
Are we talking about natural climate change or that mythical man-made climate change everyone is talking about?
That's enough. You get the picture. You can read more here if you want to waste a few more minutes of your life.
Hall, Alex. "Projecting regional change." Science 346, no. 6216 (2014): 1461-1462. DOI: 10.1126/science.aaa0629 (subs req'd)
"Climate science deniers in the main, do not understand why models are used in science." I can say that what climate science deniers DO think they understand was mostly presented to them by denialist literature, I can't find much evidence they have come to their conclusions on their own.
That being said I do try to get across the idea that climate models are experiments that provide empirical evidence. As far as I know they are the only experiments that can be done on climate?
One of the main points of Alex Hall's article was that planners and policy makers are pushing scientists hard to tell them what to expect from climate change at the regional level. The planners and policy makers aren't going to put all their plans and policies on hold for fifty or sixty years while waiting to see if a regional model run from 2015 turns out to be relevant. They would much rather the scientists let them know if the regional model increased understanding of climate at the local level. Then they can decide the specs for their bridges, and water supply infrastructure, and storm drainage systems, and transportation infrastructure etc etc.
We are doing this because there is such a strong demand from companies and (local) governments to know how climate change will affect them. It is very hard and I am not sure whether what we do already has the quality you would normally expect from science, but done right it is better than nothing. And I say this as someone who chairs a session at the EGU on downscaling. (Abstract deadline is tomorrow, if I am allowed to make this small advertisement.) It is a beautiful topic, but also very very hard.
It is somewhat bizarre that the people at WUWT complain about downscaling. You need it to adapt to climate change. If you do not know what to adapt to, you need to adapt to every possible change, which is very expensive. Any minor improvement in local predictions makes adaptation cheaper.
Normally people at WUWT & Co. are against mitigation and claim that we will simply adapt. They might not know what they are saying.
Personally, I trust the large-scale climate science a lot more, the science we need to judge how much we would need to mitigate.
Victor, your final sentence rings clearly true to me. My pushback is the political angle in addition to the growing market demand for a different product. I'm ok with CMIP5 presently being within 2 standard deviations of expected variability in observations. But my own math, if correct, also tells me that observations themselves are currently inside the 1 sigma envelope with respect to long term trends ... yet still we hear much ado about The Pause.
Just to be clear, I lament more than I challenge here, but I am interested in your political read on my points.
I am not sure what I should respond to. The global mean temperature is within the uncertainty (which is larger than the CMIP model spread), and there is no statistically significant change in the trend.
It is naturally still interesting to study why the global mean temperature is currently below the trend line and below the mean of the CMIP projections. If you understand the relationship between X and Y, it is still interesting to study how A, B and C influence Y.
There is much interesting work on that; a main contender seems to be the heat uptake by the oceans. It will usually take a few years till this is sorted out. The deviation is just one or two tenths of a degree over a short period; studying such a minor deviation is a lot harder than the global warming of 1°C over a century that we have mainly studied up to now. I am actually quite surprised we seem to be able to tell something about the reasons for these small deviations. Expect more new arguments in future and expect a clearer idea of which ones are important.
Victor,
My stats are limited to first and second year level for health science majors, so I'm not confident in my own calculations. Also, if how I am saying things is incorrect, please let me know because this is something I wish to properly understand and communicate. Looking at GISTemp LOTI, I find November 2014's anomaly to be ~0.05 K below my predicted value based on regressing monthly temp anomaly against C/C0, C0 = 280 ppmv. 1-sigma = 0.11 K when calculated against a trailing 12-month moving average.
CMIP5 is ~0.20 K higher than the November GISS 12-month MA. I believe that's inside the published 95% model spread. What I'm saying is that I'm comfortable with those results -- if they hold water -- others are not. We won't please everyone of course, no matter how "good" the results.
Per your suggestion I'll look into observational uncertainty, I was not aware that it was higher than the CMIP5 ensemble spread.
I agree it's interesting understanding why The Pause. I lately focus mostly on oceans, and at present, mostly on data attempting to figure out the magnitude of "normal" variabilities. Last night it was comparing global SST to 0-100m temperature from NODC from 1955-present. On an annual basis I find they can range in difference from each other in total by almost half a degree. Averaged over the entire surface, I worked out that it's just under half a W/m^2 as well (0.5 K / 0.8 K/Wm-2 * 0.71). Typical values are as you say, one or two tenths a degree. Lots of uncertainty noise in those data though, I'm sure.
I'm nowhere near understanding or being able to discuss causes of those fluctuations beyond basics: the energy isn't evenly distributed and pockets of it move around in unpredictable fashion. It nets out to zero over time because that's what equilibrium systems do. Simple, even though it really isn't.
I'll keep my eyes peeled for new arguments. Hints as to what are the important ones when they post will be appreciated.
One way that we test downscaling approaches is to model the past. For example there is a large project called CORDEX that is downscaling simulations for 1950-2100 over most of the inhabited regions of the world. By comparing the 1950-present period to observations, we can get an idea of whether the models are capable of reproducing current climate, including observed trends. This gives us some confidence (but does not prove) that the projections into the future have some usefulness.
Victor is correct that this problem is very hard and it is not at all easy to get it right, for some value of "right." But you should submit your abstracts to our session instead of his. ;-)
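For what it's worth, here's a bare-bones sketch of that kind of hindcast check, with made-up numbers standing in for the downscaled run and the observations - compare them over the overlapping period with a couple of simple metrics before putting any weight on the projection half:

```python
# Toy hindcast evaluation: compare a downscaled simulation against observations
# over the period where both exist (mean bias, linear trend, correlation).
# Both series below are synthetic placeholders, not CORDEX output.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2015)

obs = 0.010 * (years - 1950) + rng.normal(0, 0.10, years.size)           # observed anomalies (C)
model = 0.012 * (years - 1950) + 0.05 + rng.normal(0, 0.12, years.size)  # downscaled hindcast (C)

bias = np.mean(model - obs)
trend_obs = np.polyfit(years, obs, 1)[0] * 10       # degrees per decade
trend_model = np.polyfit(years, model, 1)[0] * 10
corr = np.corrcoef(obs, model)[0, 1]

print(f"mean bias: {bias:+.3f} C")
print(f"trend (obs vs model): {trend_obs:.3f} vs {trend_model:.3f} C/decade")
print(f"interannual correlation: {corr:.2f}")
```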
Brandon, it sounds like you would like to repeat the attribution exercise of Muller of BEST. You could look there how to do it better, I expect. But even then, this method is not really recommended, you will probably be able to find some sarcastic comments on the BEST attribution in the net.
People should submit their abstracts to Raymond's session if they are about the results of dynamical downscaling. If you are interested in methods and statistical downscaling, ours is IMHO somewhat more appropriate.
Unfortunately, the deadline has passed. (I could imagine it will be extended, the deadline was very early in the year, in many countries this week is still holidays and the number of abstracts is much lower than last year.)
Raymond,
Thank you for the alternative reading, your synopsis contains things which are of great interest to me.
Victor,
Limited as I am by my lack of stats, the nuances between the BEST method and others re: attribution would presently go right over my head. OTOH, they operate in the city containing my home ... I sometimes wonder if they give tours. :)
Most sarc I see on the 'net related to BEST is from Mosher himself.
Both: I'm a long way from writing abstracts or indeed doing any novel work at all in this field, I simply do not have the background or training for it. Interest, yes. I devote much time to study, playing with data in ways I know how (computer databases are my professional competence) mainly out of native curiosity and fascination, but also as something to help ground me when I communicate online. I doubt I'll make much difference either way by doing so, but I don't want to write incorrect information. I have a bad habit of filling in knowledge gaps with speculation when I should say, "I don't know". If I am dead wrong about something, please let me know that in no uncertain terms.
It is a shame that Willis didn't look into this in greater detail, because he would find that those who work on downscaling (as I have) routinely estimate the accuracy of downscaling schemes using a hold-out set. For instance, see
http://dx.doi.org/10.1002/joc.1318
Which uses the periods from 1958-78 and 1994-2000 for model calibration and 1979-93 for validation. The climatologists that work on this sort of thing do know about model evaluation; the above paper was the product of an EU project largely focussed on evaluation of downscaling models. They do this because accurate models are more useful for helping them to increase their understanding of climate.
You can't validate GCMs themselves as easily, but in the absence of a time machine, it is an unreasonable expectation. Personally I would rather be guided by science that is sufficiently solid to construct a physics-based model than science that can't even manage that.
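A minimal sketch of that calibration/validation split - the periods follow the ones quoted above, but the data and the one-predictor regression are placeholders, not the paper's method:

```python
# Hold-out evaluation of a toy downscaling regression: calibrate on two outer
# periods, keep the middle block back as a validation set, and score the
# predictions only on the held-out years. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1958, 2001)
coarse = rng.normal(0, 1, years.size)                  # coarse-scale predictor
fine = 0.7 * coarse + rng.normal(0, 0.3, years.size)   # fine-scale predictand

calib = (years <= 1978) | (years >= 1994)   # calibration periods
valid = ~calib                              # held-out validation period (1979-93)

slope, intercept = np.polyfit(coarse[calib], fine[calib], 1)
pred = slope * coarse[valid] + intercept

rmse = np.sqrt(np.mean((pred - fine[valid]) ** 2))
print(f"validation RMSE: {rmse:.3f}")
```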
Sou/All,
Somewhat OT, but I have been wondering a bit myself of late why the longer-period internal variability indices like, say AMO, have not been more used in hindcasting and nearer-term projections (10-20 years) for the global ensemble scenarios. At a very high level I gather that it has been deliberate protocol not to do it -- initialize the thing in the pre-industrial and see what it does when real-world radiative forcing changes begin to take hold ... this does make sense to me. I've also read Gavin Schmidt over at RC saying that retasking AOGCMs to decadal forecasting with such parameterizations has met with mixed success, but IIRC that article is now somewhat dated.
Is it anyone's sense that downscaling is how we're going to get to more skillful decadal projections -- dare I even say forecasts? Does CMIP6 intend to do things differently on this point?
Anyone with perspective or links to stuff to read at an intermediate level would not go unthanked ... most stuff I've read is either too simple and high-level or far too technical and detailed, thus it goes "whoosh ..." Thanks.
Sou, PS: I missed your link to Johnson's ArsTechnica in my first scan. Great resource, thanks. It confirms some things I thought I already understood and has the beginnings of answering my questions. I'll link hop from there. Still, I am a sponge for this topic right now so anything anyone else has got will be eagerly consumed.
Brandon, re near term projections, try this article, by Jeff Tolleson in Nature - from 2013:
http://www.nature.com/news/climate-change-the-forecast-for-2018-is-cloudy-with-record-heat-1.13344
I gather that initialising to a recent period is okay for a bit, but then the models start doing their own thing again. Same as weather forecasting.
I'm not sure what you mean about indices, AMO etc. That sort of thing is an emergent feature of the models, not something that's keyed in.
Brandon, downscaling and decadal prediction are two different things.
For weather prediction you have to initialise the model with the current state of the atmosphere. For decadal climate prediction you have to initialise the model with the current state of, for example, the oceans and cryosphere (ice). Which components of the climate system are how important is not clear yet. Currently there is some skill some years ahead in the midlatitudes. In the tropics it works better.
I am in a German project and we are trying to determine how important the hydrosphere is (the water on land, especially the water table). For now the main problem is getting hydrological models that can simulate the water table realistically running for such large areas and periods.
More information on this project, the full project.
This way of climate modelling is intended as prediction, you take the current state of the climate and extrapolate. Fortunately the influence of the emission scenarios is not that large if you only consider the next 10 to 20 years. That economists cannot make predictions for the emissions is an important reason that climatology normally works with projections.
Sou, that Nature paper is exactly the sort of thing I'm looking for. I understand you don't understand what I'm saying about AMO ... I don't quite yet understand what I'm asking either. Sometimes I think out loud in public.
Victor, thank you for the link to the project you're working on.
Both: So I grasp the basis of weather being an initial values problem, climate being a boundary value problem. My introduction to the distinction in those terms was May last year on Judith Curry's blog of all places, in a post where she was covering the debate between Lacis and Bengtsson on the topic. Andy's argument there makes more sense, so that's my current understanding of how things are.
I'm idly suggesting a "trick" with AMO and/or something like it, as a constraint over the short term projection. A hint if you will. One option would be to parameterize it in the GCM run, the other as a post hoc adjustment to the gridded output based on fancy regressions and interpolations. The result to be a hybrid probabilistic long term weather forecast masquerading as a climate projection, or the other way 'round. To make the natives happy. Maybe it would even be useful. I don't know, I do have really dumb ideas from time to time, but this one won't go away.
I know I'm not the first person to have thought this, but I've not seen a discussion of why or why not that approach simply would not work -- or even if it's actually been tried.
Victor, PS: your last paragraph. You're in my head about extrapolation, in this case saying "based on what AMO has done in the past" and carry that forward in terms of magnitude and timing as an input parameter to a GCM. I totally get it why climatology works for the long term, that's the boundary value part of the problem and I'm fully behind the scientific rationale for why it is and should be continue to be used.
If your statistical model of the AMO is better than the dynamic climate model, you could combine them in the post-processing, like you suggested. You may also be able to force the model towards the statistical model, although that is likely not trivial.
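A toy version of that post-processing blend, with everything - both series and the weight - invented purely for illustration:

```python
# Blend a dynamical model's near-term projection with a statistical estimate
# (say, of an ocean mode's contribution) as a post-processing step.
# Choosing the weight is the hard part; 0.3 here is arbitrary.
import numpy as np

gcm_projection = np.array([0.10, 0.12, 0.15, 0.18, 0.20])  # model anomalies (C), hypothetical
stat_estimate = np.array([0.05, 0.06, 0.10, 0.16, 0.22])   # statistical-model anomalies (C), hypothetical

w = 0.3  # trust placed in the statistical model
blended = (1 - w) * gcm_projection + w * stat_estimate
print(blended)
```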
Parameterisations are for small-scale processes, not for large-scale climate modes. At least the way we do it now. I am dreaming of a new type of "parameterisation" that detects large scale patterns (fronts, highs and lows, large convective systems) in an atmospheric model and then tries to improve known deficiencies in such a pattern.
Victor, thank you for confirming that GCMs won't take something like AMO as a parameter as currently designed, it was an open question in my mind. I was suggesting forcing the climate model toward the statistical model, more like nudge or "hint". I will continue reading, now in a more correct direction and a little wiser for it, I appreciate your feedback.
I suspect Willis and his various mates who tremble in fear of their whole world government boogeyman let that wash over them until they can't think sensibly about local governments.
How on earth do they think engineers can design a bridge or a sewage processing facility without projections of likely impacts during the lifetime of the structures? It makes a world of difference to the design and to the materials required for a bridge that you expect to have a lifetime of 100 or more years.

If a sewage processing facility has an outflow to a lake or ocean, it's pretty important to know which locations are more or less likely to be overwhelmed by tides or floods during its expected operating life. When thinking about expanding your rate-paying base, it's a good idea for area authorities to know which areas proposed for residential developments or industrial buildings are subject to subsidence or to flooding or to storms so you can avoid total failures, and also to set appropriate building standards.

Or maybe these people don't know that standards for building roofs in cyclone/ tornado prone areas are different from those in areas subject to heavy snow loads and also different from those in areas at risk of bushfires and more complicated again where there are combinations of different extreme burdens according to season.
adelady,
Delete"How on earth do they think engineers can design a bridge or a sewage processing facility without projections of likely impacts during the lifetime of the structures?"
I believe theirs is a simple answer; those kind of engineers have a proven track record of having been able to do so. I see it as an issue of trust -- buildings don't normally collapse, therefore there's no need to scrutinize the methods behind the success.
If you think about it, we probably have a similar heuristic. Just last night I invoked it myself in a nastygram to Willis:
------------------
Willis: We have no evidence that any of these models have been tested, verified, or validated.
Me: What kind of evidence do “we” require? To what mailing address does one send such evidence? How fast do “we” need it? How long will “we” require to review and accept said evidence and accept or reject it? Have “we” ever heard of peer-review?
Do “we” understand that major scientific journals with lots of money and prestige at stake do not publish crap on a whim?!?
I guess “we” really just don’t get it after all. Color “us” shocked.
Willis: And it is most probable that they were built by true believers.
Me: As opposed to false believers.
------------------
This in his recent "A Neutral View of Oceanic pH" WUWT post, the context is coral bleaching specifically due to acidification. I had given him a cross section of papers from lab to in situ to modeling work as an anecdotal demonstration that the bases are indeed being covered.
I may as well have been speaking Swahili.
Judging by the recent comments on the WUWT article "On the futility of climate models", it seems engineers tend not to like climate models.
Engineers are second only to physicists in their arrogance that they know better about other specialities than those working in the subject. :-)
A colleague of mine works in cosmology, which, while it is a perfectly legitimate scientific field, attracts more cranks per square centimeter than almost any other subject.
He has noticed over the years that a remarkable percentage of the crank screeds he gets are authored by retired engineers.
As to your comment about physicists: as a rule a very competent physicist is more capable at a given discipline than the bottom 25% of most other scientific fields :-)
The problem with Engineers is that they are under the illusion they "get" physics...
Great line in the new Stephen Hawking movie:
"Cosmology is a form of religion for intelligent atheists"
That is quite a nice line! However, it's no longer as accurate as it used to be.
The measurements of the cosmic background fluctuation spectrum by (largely) the WMAP and Planck satellites are simply astonishing -- a complicated-looking bumpy curve generated by theory before the fact matches observations with astonishing fidelity. The very origins of the universe and the structure beyond the horizon is still subject to a lot of speculation, but the overall big bang picture is by now on an extremely firm footing, similar to the heliocentric planetary system. It's really that good.
Regarding physicists, this summary is close to the mark.
Not all physicists (in fact not most) are like this, but enough are that "physics arrogance" is a searchable phrase.
Yes indeed, the WMAP and Planck results are incredible given what they represent, not to mention the incredible technical achievement of sending a satellite to a Lagrange point, cooling the detector down to a few milli-kelvin above absolute zero while maintaining temperature uniformity at the micro-kelvin level....
The xkcd is a classic; however, the physicists that are typically guilty of such behaviour are usually those that have gone emeritus...
I remember seeing a talk by one of the WMAP team after it was launched, but before they announced any results. It was entirely about the satellite and especially the painstaking steps taken to eliminate systematic errors and ensure accurate calibration. I think many of my colleagues found it a bit boring, but I sure didn't -- it was a masterful experiment, and hearing that level of detail gave me much more confidence in their ultimate results.
The jaw-dropping moment for me was when he showed the raw data from a single rotation of the satellite, which I think took a minute or two. It showed the dipole anisotropy as an obvious sine wave in all the channels -- this, which had taken over 10 years to be established after the CMB discovery!!
@Palindrom
The Planck results rank up there as one of the greatest intellectual achievements of our species.
I presume you are aware that it is theoretically possible to improve the sensitivity of the detector to the magnitude of the CMB temperature fluctuations but as far as the angular resolution goes, we are at the physical limit.
Well, it's complicated. The ground-based BICEP has a much larger antenna than Planck and hence is able to go to smaller angular resolution, and indeed it sees higher-harmonic peaks of the baryon oscillation just as expected. (As an aside, their highly-touted claim of seeing the polarization signal of inflation some months ago may very well be an artifact caused by foreground dust, especially in the Magellanic stream -- an unfortunate byproduct of their South Pole location, where they can only see half the sky).
What you may be thinking of is that a better experiment will not give us a better read on the cosmological parameters, because we have only one sky to look at, which reflects a single "throw of the dice". We've mapped it as well as we ever can. What we'd really like to do is to see an entirely different realization of the microwave sky -- say from a galaxy a billion light years away -- or even better, a very large number of such maps. One should "never say never", but it's a pretty safe bet that we'll never have that information, so we'll just have to make do with what we have.
This is a golden age of science, comparable or greater than the amazing intellectual ferment of ancient Greece. We are privileged to be alive at this time.
I think people on this thread should consider that it's hard for physicists not to appear arrogant when they understand things as well as they do ��
andthentheresphysics
It is not that they "appear" to be arrogant! :)
I should note that the emoticon I tried to put at the end of my previous comment failed. First time commenting using an ipad. Apart from that small error, I normally don't ever make any mistakes ;-)
Hmmm. Reminds me of an old Mac Davis song that I swear was my Dad's theme song: Oh, Lord, it's hard to be humble when you're perfect in every way..........
FLWolverine
Or the classic "I once thought I was wrong, but I was mistaken".
Better than the other classic "I used to think I was indecisive, but now I'm not so sure."
@palindrom
I was referring to fundamental astrophysical limits having to do with the small scale foreground microwave sources that have to be subtracted. It was nicely explained in a write up that I can't seem to find now. Basically Planck is just about as good as it will get, future progress will have to be made on the polarization front...
Flakmeister,
Yes, IIRC, it's called the "confusion limit", but I may just be confused :-)
I certainly was...
andthen -- I believe you're correct.
I've always wanted to start a radio astronomy company called "Confusion, Ltd."
Oh come on, there's no limit to confusion.
Delete"Do you build and fly an aircraft thousands of km over years and take observations before deciding if the model is relevant?"
Actually they do do this (it may be several months to a very few years).
But I tell you what, go ahead and take a 100% computer designed jet (meaning sans any scale model tests), pay to be a passenger (and sign a mandatory death waiver) on that very first (with zero actual flight tests) commercial flight at full 100% design speeds into a raging thunderstorm even.
You see, we don't need no stinkin' FAA.
As to structural engineering, the term commonly used is fidelity, as in high fidelity, something that is seriously lacking from global AOGCM's where they all use the term "skill." Structures work, for the most part, based on the assumptions inherent in being of high fidelity.
The high fidelity of a regional AOGCM can only be as good as the high fidelity of the global AOGCM's.
Also, the parameterizations of the global AOGCM's must be (or should be as close as possible to) scale invariant, and well, you see, the current parameterizations are abjectly NOT scale invariant (given the very coarse spatial resolutions). That will forevermore be a "feature" and not a "bug" of all complex AOGCM's.
Scale invariance is best looked at, for example, as what an ant can do versus what a human can do strength wise. You have to do something called dimensional analysis to understand what is important at the scale of an ant versus what is important at the scale of a human.
And please, don't even get me started on all the applied mathematicians and engineers who developed all the numerical methods that climate scientists are now using. IMHO, kind of like giving toys to a child.
"..kind of like giving toys to a child."
A bit condescending and arrogant don't you think? Kind of confirms Anon's assessment above.
Is it difficult to use numerical methods? No. Are there fast computers to do it for you? Yes. Do you consult with others (mathematicians) to check the method and results? Yes.
Ooh, let me guess, Everett is an engineer (or a physicist).
You're a funny man, Everett. Tell me that a full sized commercial airliner is built and operated commercially without any computer modeling first.
If you can do that, then tell us how you would go about building a full sized earth to test.
And I believe you are wrong about the regional models. They include features to scale that wouldn't show up on GCMs, because of the coarser scale. That's the whole point of downscaling, isn't it?
On another note - since I probably won't manage to write about it, or not for a bit - strangely enough after Willis' dumb article at WUWT, there's one that sort of contradicts it (by anecdote). Some engineer was talking about how he made a big mistake by extrapolating from small to large scale - and it didn't work. Nothing as complex as a model of earth of course.
" You have to do something called dimensional analysis to understand what is important at the scale of an ant versus what is important at the scale of a human."
Oh, my! Now we're getting a lecture presuming to inform us about things I teach my intro classes on the very first day!
It must feel wonderful to be so certain that anyone who works in a different field is automatically an idiot.
Some engineer was talking about how he made a big mistake by extrapolating from small to large scale - and it didn't work.
Sou
Reminds me of arguing with someone who insisted that climate models could not work until you reduced the grid to 1 cm. (Yes, 1 centimetre). It was something to do with his "model" of his flask in the lab having problems.
However much anyone pointed out that the flask was not comparable to the earth and a grid of 1 cm would take approximately 1 trillion gazillion years to compute he insisted he knew what he was talking about.
Probably the same denier engineer. But reversing his argument to fit the current set of facts of the moment.
I'm just a dumb canuck, but can someone explain exactly how dimensional analysis allows one to determine what is important at the scale of an ant vs. that of a human? I am all ears...
Delete"Sceptics" criticise "warmists" for exhibiting groupthink, yet they routinely exhibit it themselves. I speak as one who's a sceptic, firmly in the "show me" camp, and who instinctively distrusts those in the "believe me" camp. Don't just tell me - prove it.
I'm not sceptical about numerical models in general, but I am about GCMs. Their predictive skill is clearly not very good. The wide divergence of projections into the future shows that. GCMs cannot be accurate, even perhaps reasonably accurate, because there's much we don't know about climate, the individual factors, and how they interact. I think the IPCC FAR was right when it stated that climate was too complex and chaotic to be modelled accurately (or perhaps at all). Nothing I've seen or read since has convinced me otherwise.
However, that doesn't necessarily mean there shouldn't be any attempt(s) made to model climate, globally or on a regional or local scale. A model projection is better than no projection at all. What matters is how much credence is given to the projection(s), and what is at stake. A complication is that models which hindcast climate reasonably well, tend to forecast climate less well, and those which appear to forecast fairly well, are generally worse at hindcasting. The idea that taking the average or median of an assemblage of models (only one of which can even resemble future reality, because of overall divergence) is "the best we can do" is clearly wrong, in my view. What can be done to remedy the situation is unclear, apart from improving the models, which would simply reduce the divergence, and slightly improve the accuracy of the "average".
Finally getting to the point, I've come to the conclusion, after reading this post, and much cogitating, is that climate models can play a role in modelling regional climate; the variables are fewer (or vary less), and thus the interactions are more limited in result. It should be possible to tweak the models to reflect regional factors better, and produce output of some worth. Am I right, or just not wrong?
" "Sceptics" criticise "warmists" for exhibiting groupthink, yet they routinely exhibit it themselves."
I think we can all agree on that. The larger point is that the evidence for warming via anthropogenic greenhouse gas emissions doesn't rely on models. I suspect that your skepticism is closer to denial of general climate science.
Everett's example of testing aircraft before going to mass production is the wrong analogy.
The notion in question is that one can judge the *relevance* of models by the extent to which they increase understanding. Before a test prototype (eg of aircraft, motor vehicle) is constructed, there will have been modelling done of various design aspects, particularly novel ones that haven't previously been tested. The models would have been considered useful if they increased understanding.
I doubt too many manufacturers would go to the expense of building a prototype or test aircraft, without some modelling first.
It would be very difficult to build a prototype of earth, which is why, when we inadvertently conduct climate experiments, like seeing what happens when we add CO2 or CFC's to the atmosphere, we conduct them on the only earth we have.
FWITW: Everett F Sargent is a retired civil engineer.
Mostly says: "I speak as one who's a sceptic, firmly in the "show me" camp, and who instinctively distrusts those in the "believe me" camp. Don't just tell me - prove it."
What evidence or proof would you accept that takes you from the "show me" camp to the "I get it now" camp?
Any convincing evidence. I said I'm a sceptic - I didn't qualify that with exactly what I might be sceptical of. It doesn't mean I'm a non-believer, just that I need proof. Shouldn't we all be so? Should we devolve our thinking to others? Scientists and researchers in general are just as fallible as the rest of us.
Get precisely what now? What is "it"? Do I have to buy into some whole "bag of tricks" completely, in every detail, or be labelled "a denier"? To be sceptical is to question everything, until evidence is sufficiently conclusive to lead to acceptance. Lemme tell you a story...
Some years ago, I'd read a fair bit on "sceptical" blogs about how sea-level wasn't rising as fast as satellites and research papers showed. I bought the line, until I started checking for myself. I found that sea-levels are rising at very different rates around the globe, even falling in some places. I built up a database of several hundred gauge records, and now being in a position to report objectively, blogged and commented on what I'd found. It soon became obvious that many "sceptics" didn't want the truth - it was inconvenient. It was "siding with the enemy". If reporting the truth means I'm "siding with the enemy" then to some, the enemy IS truth.
Hello MostlyHarmless
Congratulations on doing a bit of investigation of sea-levels and satisfying yourself that the "sceptical" blog was not reliable. And extra points for seeing the "sceptics" did not want to discuss your investigations.
You have indicated that you demand high levels of proof and a lot of personal effort to satisfy yourself of what is correct. Life is too short to be able to do this about every detail though. You have to develop the skills to judge which sources are trustworthy and to tune your nonsense antennae. That is a whole lot more effective and quicker than re-inventing the wheel all the time. So, yes, to some degree we all devolve our thinking to others. Or as I would put it, we stand on the shoulders of others. There is nothing wrong in that so long as you keep enough scepticism to know when and what questions to ask.
Good luck in your journey of discovery.
I've investigated a whole lot more than just sea-levels! Agreed, you have to judge which sources to trust, but after reading upwards of 500 research papers (on a range of topics) in detail, and checking many of the arguments, statistics and references I'd say my level of trust has fallen rather than risen.
MostlyHarmless : It's clear that the level of trust in climate science within the scientific world in general has not fallen, and they are at least as qualified as you to judge the rigour of scientific research.
DeleteIt has to be considered that your falling trust is due less to flaws in the research than to its failure to confirm an already fixed opinion.
Cugel
Well, MostlyHarmless did say that (s)he went along with the "sceptical" line about sea-levels and then changed his/her position. So maybe (s)he does not have fixed opinions. I am not sure (s)he quite fully takes on board what is meant by trusting a source. (S)he has in my view a rather rigid sense of black and white and wants to be able to pigeonhole everything with certainty. This is usually what you find on denier blogs. Certainty is their friend.
There is another skill I did not mention and should add which is the "go along with until you know better" aptitude. That probably is one of the most useful abilities - just accepting some ideas until something better comes along, or something contradicts it. Hold it in the picture while it is useful for understanding. Not everything has a complete answer and we have to be able to hold a range of possibilities and uncertainties.
Jammy Dodger : I'm thinking of an over-arching opinion such as "this is being hyped" which is not as mind-killing as "this is an enormous hoax perpetrated by the One World Gummint". MostlyHarmless is clearly more intelligent than the average WUWTer (no great endorsement, since I'd say the same of my dog) and won't fall for every piece of transparent nonsense the way they do. That doesn't preclude their opinion being fixed.
My opinions certainly aren't fixed, they're constantly changing and evolving. John Maynard Keynes is famously quoted as saying “When I find new information I change my mind; What do you do?” or a variant thereof.
DeleteCugel said "It has to be considered that your falling trust is due less to flaws in the research than to its failure to confirm an already fixed opinion."
I said "...and checking many of the arguments, statistics and references I'd say my level of trust has fallen rather than risen.", which is unequivocal, and doesn't imply or betray any fixed opinion.
I gave the sea-level example as just that - an example. I agree with Jammy Dodger that one should
"go along with [something] until you know better" - "just accepting some ideas until something better comes along, or something contradicts it. Hold it in the picture while it is useful for understanding. Not everything has a complete answer and we have to be able to hold a range of possibilities and uncertainties."
Absolutely.
" A complication is that models which hindcast climate reasonably well, tend to forecast climate less well, and those which appear to forecast fairly well, are generally worse at hindcasting."
Citation, please.
Flakmeister -- It's surface-to-volume ratio arguments. If you take an arbitrary shape and scale it up or down, surface area goes as the square of the scale factor while the volume increases as the cube.
For example, suppose you had a (large) ant that was 1 cm long, and you tried to make an ant 1000 times larger, scaled up exactly in every dimension; it would be 10 meters long! But it couldn't function -- here's one reason. Ants don't have lungs; their oxygen supply diffuses in from their surfaces. The 10-meter ant would have a surface that was 1000^2 = one million times larger than the 1 cm ant, but it would have a volume that was 1000^3 = one billion (American usage) times larger than the 1 cm ant. Each square millimeter of the large ant's surface would be asked to supply oxygen to a volume 1000 times larger than each square millimeter of the 1-cm ant's surface, and oxygen simply doesn't diffuse that fast.
Similar considerations apply to the structural strength. The cross-sectional area of the structural parts go up as the square, but the weight to be supported goes up as the cube. Eventually, it's literally unsupportable.
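To put rough numbers on the square-cube point (a trivial sketch):

```python
# Scale every linear dimension by k: surface area grows as k**2, while volume
# (and hence weight and oxygen demand) grows as k**3.
k = 1000                             # 1 cm ant scaled up to 10 m
area_factor = k ** 2                 # 1,000,000
volume_factor = k ** 3               # 1,000,000,000
print(volume_factor // area_factor)  # each unit of surface now serves 1000x more volume
```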
That ain't dimensional analysis...
dimensional analysis
Deletenoun
Mathematics
noun: dimensional analysis
analysis using the fact that physical quantities added to or equated with each other must be expressed in terms of the same fundamental quantities (such as mass, length, or time) for inferences to be made about the relations between them.
Everett and palindrom seem to both be using the term to mean a common thing that makes sense. That the term is also used to mean something else doesn't mean that they are wrong (though it makes cross-discipline communication a bit harder).
Since you asked, straight from Wiki:
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their fundamental dimensions (such as length, mass, time, and electric charge) and units of measure (such as miles vs. kilometers, or pounds vs. kilograms vs. grams) and tracking these dimensions as calculations or comparisons are performed. Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for performing such conversions using the rules of algebra.[1][2][3]
Any physically meaningful equation (and any inequality and inequation) must have the same dimensions on the left and right sides. Checking this is a common application of performing dimensional analysis. Dimensional analysis is also routinely used as a check on the plausibility of derived equations and computations. It is generally used to categorize types of physical quantities and units based on their relationship to or dependence on other units.
Sorry, maybe there is a term in bio-mechanics for your ant analogy and limits to scaleability but dimensional analysis isn't it...
numerobis
Not so sure about that.
Their posts made sense but I am not sure they described "dimensional analysis".
Bingo...
Delete"Dimensional analysis" has several different meanings. The kind of example I gave uses one sense of the term; the standard unit-checking stuff (which is literally in the very first handout I give to my first-year physics students) is another.
DeleteIn physics, dimensional analysis is often used to refer to a kind of estimation skill in which one considers the dimensionality (i.e. units) of the answer, and the various factors that could possibly go into determining the answer, and then tinker-toys together factors to give the right units. It's not especially rigorous, but it can give great insight into problems. Victor Weisskopf of MIT was especially famous for this.
Well, I didn't mean to divert this thread, but, well someone did mention engineers in a pejorative sense, so, you know, I sort of went there.
F=m?
E=m?
Hmm, I'm missing somethings, what could those somethings be?
http://en.wikipedia.org/wiki/De_Havilland_Comet
Pretty sure there was an age of commercial jets (and military jets) that predates modern electronic computers.
Just, you know, sayin'
Now, let's also inform ourselves with something called Information Theory, shall we?
The basic principle there, is that it takes a finite amount of time to propagate something over some length, call that length an element, or call that my abject lies, for example.
Anyways, I won't bore you with the details, as you all appear to be engaged in groupthink or blind acceptance or something else you all call us DENIERS about.
Everett, fine to defend the honour of engineers. But other than that. What are you talking about?
Delete"Pretty sure there was an age of commercial jets (and military jets) that predates modern electronic computers."
Do you have any idea of how short the life expectancy of a military test pilot at, say, Edwards AFB was back in those days? And of those that lived, how many had to punch out of a broken prototype at least once in their career?
@EFS: I don't see any groupthink, just a bunch of posters who are confused by what you are saying, for good reason. And drop the arrogant tone; too many people here are well trained in science (and engineering) to be talked down to by someone who can't even write a clear post.
DeletePL:
Yep, pretty much spot on...
Models were in existence prior to computers. Computers just made complex modelling a whole lot easier.
It was people at WUWT who declared themselves as engineers who decided they knew a lot more about complex climate models than the people who have developed the models know. All they showed was that they didn't understand the climate models. (One chap decided that General Circulation Models boiled down to a combination of a simple equation for radiative forcing combined with a simple equation for climate sensitivity. He was wrong. His comment was elevated to a WUWT article, but he didn't have the first clue.)
I must be missing something, Everett. The de Haviland DH 106 Comet was produced based on a prototype (a full-scale model), but the design still had flaws. There were various prototypes/full scale models built after rejecting several design elements - on the basis of what? Some modeling by the designers perhaps? Wiki doesn't say. In any case, undoubtedly the models would have "increased the understanding" of what was feasible, practical etc. Which is the point in question.
DeleteAlso, the designers were not omnipotent - who is?
Our planet probably has flaws too, but it's the only full-scale model we have at the moment and for the foreseeable future. Simulations are just that, and the next best thing to a full scale test model. (I'd guess that most new aircraft would use computer simulations of various things before they even get to the prototype stage these days.)
Also ...
Delete"http://en.wikipedia.org/wiki/De_Havilland_Comet
Pretty sure there was an age of commercial jets (and military jets) that predates modern electronic computers.
Just, you know, sayin'"
Engineers. Metal fatigue. De Havilland Comet.
Just sayin', you know.
Square windows.
Just sayin', you know.
Oh, and before I leave, here's something else to consider (or not):
http://en.wikipedia.org/wiki/Sensitivity_analysis
(In the style of Everett).
Intravoxel incoherent motion - bet you haven't thought about that:
http://en.wikipedia.org/wiki/Intravoxel_incoherent_motion
Until we integrate intravoxel incoherent motion into the GCMs, we'll never be able to predict global warming since we won't be able to predict human actions. So we need to wait and see what happens.
Whatever you do, do not mention praxeology...
I won't mention praxeology if you don't mention praxeology. So let's neither of us mention praxeology, OK?
Did I hear someone mention praxeology? Don't get me started on praxeology ...
Damn... Time to employ a variation of the Jedi Mind Trick...
DeleteThese are not the Austrian economists you are looking for....
I repeat:
These are *not* the Austrian economists you are looking for....
If your software allowed, I would be commenting as Entropic Man
Biologists talk about the Power Function.
The "how to build an airliner" conversation is interesting but misses the point. Those who attack climate models would need to explain just how they would run real-world experiments to validate them given that we only have one planet and we are talking about events and effects decades into the future.
I keep thinking of global warming as a risk management issue. If there's a material percentage chance of really bad outcomes that we could prevent by taking reasonable countermeasures, we should take the countermeasures. It doesn't matter that there isn't a 100% or 90% chance of that outcome. You don't require a 100% chance that your house will burn down this year before you buy homeowner's insurance.
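Crudely, and with numbers that are entirely made up, that insurance logic is just an expected-value comparison:

```python
# Homeowner's-insurance logic: a small probability of a very bad outcome can
# make a modest up-front countermeasure the cheaper bet in expectation.
# All three numbers are invented for illustration.
p_bad = 0.10           # assumed chance of the bad outcome
cost_bad = 1_000_000   # assumed damage if it happens
cost_measure = 50_000  # assumed cost of the countermeasure

expected_loss_without = p_bad * cost_bad
print(expected_loss_without, "vs", cost_measure)  # 100000.0 vs 50000 -> countermeasure wins
```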
Off-topic, but relevant to this blog in general I suggest. I mentioned groupthink earlier.
A prime example of groupthink is the reaction to the so-called "Unified theory of climate" in several posts on WUWT a few years ago. I quickly recognised the "theory" as a combination of distortion, misunderstanding (accidental or deliberate?), misapplication and part-truths, some of those being just plain lies, after reading the first few paragraphs of the first post. Simple checking of references exposed more of all those characteristics, especially the part-truths and lies. I found no evidence in the comments that anyone had checked out references. A few truly sceptical commenters tore up one or more of the planks the "theory" was based on, but the general reaction was "I always knew that the GHE was a crock - all that crap about so-called 'back-radiation' - this is one in the eye for the warmists".
That truly is groupthink, and it's exhibited on "sceptic" blogs literally daily. If the "theory" had been presented as one supporting GHE and/or anthropogenic climate change, it would have been minutely examined for any kind of error or mis-statement. THAT is scepticism. Blind acceptance of anything you agree with, or want to agree with, and which appears to destroy or dilute your opponents' arguments, is not. Most posts on WUWT which question, make opposing claims about, or just appear to debunk so-called "warmist" or "alarmist" claims or theories or statistics are given what amounts to a "free pass" - little or no checking or validating is done by commenters. Over the last 4 years, Nils-Axel Mörner has claimed on WUWT and elsewhere that global sea-level "isn't rising", "is rising at 1.5 mm/year", and that sea-level in northern Europe has been falling since 1950 (or 1980, or...). No-one checks anything, no-one sees the contradictions (or at least comments to that effect). This isn't scepticism, it's blind chauvinism. None of his claims are true, of course. If you simply disbelieved everything he said, and assumed the opposite, you'd be close to 100% right.
Is that "unified theory" article the one by Nokilov (?) and whatsisname, the forestry folks? If that's the one you're thinking of, it really set the gold standard for "not even wrong" analysis. It was bad. How bad? Gerlich and Tscheuschner bad!
Yes it is, by Ned Nikolov and Karl whatsisname (instantly forgettable). Was it Gerlich and Tscheuschner bad? No, much worse than that. G & T (not my favourite tipple) at least had a bit of yer akshuwell science in their offering. Nikolov and whatsisname had the backside of the moon at 0 K 'cos it doesn't get any insolation, and it goes rapidly downhill from there.
Verification and validation of numerical models of natural systems is impossible.
?????
That is not a very sensible statement. Perhaps you should at least qualify it.
Climate models can only be evaluated in relative terms, and their predictive value is always open to question.
That's neither a sensible statement nor a complete notion. Relative to what? Open to question? Anything and everything is open to question - science is arguably based on questions.
Why are you playing the fool, Everett?
Natural systems or climate models? Make your mind up.
If you want sensible discussion then random and silly statements are not a good start. If "someone else" made those statements then do not just blindly post them here without reason.
Else we may come to the conclusion you are trolling.
We all greatly appreciate the pioneering works of teh climate scientists and their machines:
https://www.youtube.com/watch?v=fw_C_sbfyx8
Why does a video of early attempts to fly elicit appreciation of climate scientists?
The primary value of climate models is heuristic.
Working the troll lines pretty hard tonight, eh?
"The primary value of climate models is heuristic."
That doesn't make any sense - but then neither do a lot of the other comments. Sure, there are simplistic climate models like the various Bayesian estimates of climate sensitivity, but that's not what most people think of when they talk about climate models. The sophisticated physics-based climate models used as part of CMIP are anything but.
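For what it's worth, those "simplistic" estimates really are just a few lines of arithmetic. Here's a toy energy-budget calculation in that spirit - the input ranges are invented for illustration, not taken from any published study, and a crude Monte Carlo stands in for a proper Bayesian treatment:

```python
# Toy energy-budget estimate of equilibrium climate sensitivity.
# The whole "model" is one algebraic relation plus assumed uncertainties;
# all ranges below are invented for illustration. Compare that with a GCM.
import random

random.seed(0)
F_2X = 3.7  # canonical forcing for doubled CO2, W/m^2

samples = []
for _ in range(100_000):
    d_temp = random.gauss(0.75, 0.10)    # assumed observed warming, K
    d_forcing = random.gauss(2.0, 0.4)   # assumed total forcing, W/m^2
    d_uptake = random.gauss(0.65, 0.25)  # assumed ocean heat uptake, W/m^2
    denominator = d_forcing - d_uptake
    if denominator > 0.1:                # discard near-singular draws
        samples.append(F_2X * d_temp / denominator)

samples.sort()
lo, med, hi = (samples[int(len(samples) * q)] for q in (0.05, 0.5, 0.95))
print(f"Toy sensitivity: {med:.1f} K (5-95% range {lo:.1f}-{hi:.1f} K)")
```

One algebraic relation and some assumed error bars: nothing remotely like the physics-based models run for CMIP.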
The above (3) are not my actual statements.
Someone else originally made these statements.
A Futurist (of sorts) no less, but I prefer the term Boomer Doomer.
Are you trying to distance yourself from 3 stupid statements?
That depends.
By that, I mean that the 1994 Science article is only cited something like 2,235 times (in Google Scholar):
http://scholar.google.com/scholar?cluster=15158395300312255637&hl=en&as_sdt=0,25&as_yhi=1994&as_vis=1
I would very kindly suggest that some people here might want to read the article.
I personally don't hold the exact same position/opinion that the lead author suggests, but I do consider it very interesting, nonetheless.
Your thoughts (after reading said article)?
http://courses.washington.edu/ess408/OreskesetalModels.pdf
Everett
You quoted three statements out of context and now you want a conversation? Go away.
My thoughts, after looking at said article, are that said article is a long and fancy restatement of the truism that "all models are wrong, but some are useful".
And that you took all those statements out of context simply to annoy people, i.e., you're behaving like a jerk.
Go away.
Everett
Model-bashing is an old and shopworn tactic. Best find something else to make a contrarian fuss about or risk being marginalised along with the rest of the blowhards.
Here is well-known model sceptic James Hansen on the way it really is:
TH: A lot of these metrics that we develop come from computer models. How should people treat the kind of info that comes from computer climate models?
Hansen: I think you would have to treat it with a great deal of skepticism. Because if computer models were in fact the principal basis for our concern, then you have to admit that there are still substantial uncertainties as to whether we have all the physics in there, and how accurate we have it. But, in fact, that's not the principal basis for our concern. It's the Earth's history - how the Earth responded in the past to changes in boundary conditions, such as atmospheric composition. Climate models are helpful in interpreting that data, but they're not the primary source of our understanding.
TH: Do you think that gets misinterpreted in the media?
Hansen: Oh, yeah, that's intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that's the source of our knowledge. But, in fact, it's supplementary. It's not the basic source of knowledge. We know, for example, from looking at the Earth's history, that the last time the planet was two degrees Celsius warmer, sea level was 25 meters higher.
And we have a lot of different examples in the Earth's history of how climate has changed as the atmospheric composition has changed. So it's misleading to claim that the climate models are the primary basis of understanding.
@BBD: Thank you for reminding us of this really important quote. It is the Earth's history and straightforward physical principles that cause concern, not specific models. We've known the sensitivity to CO2, to within a factor of two, for over a century.
Thanks, PL. Although as usual, the credit correctly belongs to Hansen for spelling it out so that even teh likes of me can understand what is important and what is not.
@Palindrome: thank you. Reading the article was like reading 20th century French philosophy, and I was on the verge of asking someone here for an explanation (assuming an explanation was possible, which is not an assumption that applies to French philosophy).
FLwolverine
I'm not the only one who finds some utility in discussing models (all models, past, present and future):
http://www.rotman.uwo.ca/rotman-conferences/fall2014/
At the moment, I do find the subject matter very interesting.
There does appear to be a lot being said about it recently, at least in Philosophy of Science circles.
@EVS. You're acting like a jerk. Go back and address BBD's point just above, and the other conversations where you just drop a comment and move on. Behave like an adult. Dropping random quotes and saying you "find the subject matter very interesting" isn't useful.
This is a test. ClimateBall ™ This is only a test.
@EFS: Go back to BBD's quote about Hansen and make a sensible point.
Or, if by playing the "ClimateBall" card, you are trying to tell us you're just here to while away your time and waste others' time, say it straight. Then the mods can move in.
"wile" or "while"?
Oh, before I forget. Go away Everett.
And cut. And wrap.
As we enter the post-Pausal era we'll be hearing a lot more about models being rubbish. And about the Medieval Warm Period. And thriving dinosaurs. And Al Gore's weight issues. And squirrels.
Oh there's always a 'pause' for people who fixate on short term noise, it just changes its duration. I'm guessing there's been "no statistically significant warming" since 2008.
I first heard "no warming since 1998" in 2005, so you're right, 7 years is enough for a Pause. By 2007 I was hearing that "the world has entered a long-term cooling phase" with an Ice Age coming, so I guess 7 years counts as long-term for some people (I suspect it's related to attention spans and the shortness thereof).
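Those short-window claims are easy to manufacture. Fit a trend to just a few years of data that genuinely contains an underlying warming trend and the uncertainty will usually swamp the signal. The synthetic series below is purely illustrative (an assumed 0.017 K/yr trend plus random noise), not real temperature data:

```python
# Why "no statistically significant warming since <recent year>" comes cheap:
# even with a real underlying trend, a short noisy window usually gives a
# slope whose approximate 95% interval straddles zero. Synthetic data only.
import random
import statistics

random.seed(42)
TRUE_TREND = 0.017  # assumed underlying trend, K per year
NOISE_SD = 0.1      # assumed year-to-year noise, K

years = list(range(1985, 2015))
temps = [TRUE_TREND * (y - years[0]) + random.gauss(0, NOISE_SD) for y in years]

def ols_trend(x, y):
    """Return (slope, ~95% half-width) from a simple least-squares fit."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, 1.96 * se

for start in (years[0], years[-7]):   # full record vs the last 7 years
    xs = [y for y in years if y >= start]
    ys = temps[years.index(start):]
    slope, half = ols_trend(xs, ys)
    print(f"{start}-{years[-1]}: {slope:+.3f} +/- {half:.3f} K/yr")
```

With most runs the full record recovers the built-in warming clearly, while the 7-year window gives an interval wide enough to claim "no significant warming" - which is the whole trick.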
Squirrels? Where??!!