Category Archives: Climate Data

All gone by the year 2020: is observed glacier melt the basis for the prediction?

This is part 6 in the series on the prediction that glaciers in Glacier National Park will be gone by 2020. You might want to read part 1, part 2, part 3, part 4 and part 5 if you haven’t already.

In a previous post, I walked down memory lane to the time I was about 19 years old. In this post, I will go back to the more tender age of 14 and my first trip abroad. It was a trip to Switzerland organized through my parents’ health insurance plan. Its intention was to give working-class youth the opportunity to breathe in the pure mountain air of Switzerland. We were a bunch of kids of the same age and, as you would expect, there were a lot of outdoor activities like sports that we could choose from.

Not being much into sports and more of a nature-minded boy, I often joined the hiking group that did nature walks in the vicinity of the center where we stayed. At one point, there was the opportunity to see a glacier. I had learned about glaciers in school and was eager to see one. It was not that close by, and after a stiff walk we arrived at the glacier. Boy, was I disappointed. We saw some melting ice chunks and a small patch of ice stretching along the mountain. Although it was nice to see some ice at that time of the year, it was a real anti-climax.

One of the guides then said that a lot of glaciers were shrinking and that this glacier was no exception. This was in the mid 1970s.

I was reminded of this scene from my youth when reading this paragraph in the Hall & Fagre paper “Modeled Climate-Induced Glacier Change in Glacier National Park, 1850–2100” (my emphasis):

Continue reading

The good old times when carbon dioxide and methane levels as well as temperatures were low

The previous post was about “the most popular contrarian argument” according to skepticalscience (“climate changes before, so current climate change is natural”) and what they seem to consider a live example of such a claim. I argued in that post that it was actually not a good example of what they want to prove.

What I hadn’t discussed yet was how skepticalscience “debunked” this most popular contrarian argument. They did this in the “Climate’s changed before” myth page that was apparently based on this example.

They “debunked” this “myth” by stating that the climate is indeed always changing, but the difference is that it is changing much faster now than in the past because of our increasing emissions. This is how it starts:

Continue reading

Heat waves: extreme temperatures or just higher summer temperatures?

In the previous post, I ended rather abruptly by saying that one of the problems with the vagueness of heat wave definitions is that they are difficult to compare. What I meant by that is that members of the public who don’t catch the nuances of those different definitions think that exactly the same thing is being talked about, while the definitions are in fact distinctly different.

For example, when the weather (wo)men in our region talk about a heat wave, they mean a period of at least five consecutive days with temperatures over 25 °C, of which at least three days with temperatures over 30 °C, at Uccle (Belgium) or De Bilt (the Netherlands). We have no real problem with that because in our country we are used to temperatures of 20 °C, 25 °C or even higher. But we don’t cope well with 30+ °C temperatures, especially when they last more than a couple of days. These are rare occurrences in our region and we are not used to them.
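
For the technically inclined, here is a minimal sketch (in Python, with made-up numbers) of how such a regional threshold check could look. The function name and the exact threshold handling are my own illustration of the definition just described, assuming the thresholds apply to daily maximum temperatures; this is not the official KMI/KNMI procedure.

    # Sketch of the regional definition described above: a run of at least five
    # consecutive days with maxima over 25 °C, of which at least three days go
    # over 30 °C. Illustration only, not any official implementation.
    def has_heat_wave(daily_max):
        """Scan a series of daily maximum temperatures (°C) for a qualifying heat wave."""
        run = []  # current run of consecutive days over 25 °C
        for t in daily_max:
            if t > 25.0:
                run.append(t)
                # run long enough, with at least three "tropical" days over 30 °C?
                if len(run) >= 5 and sum(1 for x in run if x > 30.0) >= 3:
                    return True
            else:
                run = []  # a cooler day breaks the run
        return False

    # Hypothetical examples: six warm days with three days over 30 °C qualify
    print(has_heat_wave([26.1, 27.5, 30.2, 31.0, 30.4, 28.3]))  # True
    print(has_heat_wave([26.1, 27.5, 30.2, 24.0, 31.0, 30.4]))  # False: the run is broken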

Bottom line: when our weather services declare a heat wave, it is hot and we need to take measures to cope with such (rare) heat. That is the significance of the definition: it captures the extremes of a region. Extremes that could have an impact on those who are vulnerable, like small children, old people or those who are ill. In our current societal structure and in our climate, we just aren’t accustomed to these temperatures.

Now enter the new definition of a heat wave: the average temperature over three consecutive days. There is no regional threshold anymore, because the investigators wanted to compare between countries with different local definitions. Now we see an upward-going graph:

Continue reading

Reliable measurements or pliable estimates?

The last three posts were mostly about the adjustments of the ocean data done in the Karl 2015 paper. This was because the adjustments in the ocean data had the biggest impact on the result (that there wasn’t something like a “hiatus”). Kevin Marshall of the excellent blog manicbeancounter.wordpress.com reminded me in a comment on the previous post that the surface datasets had issues as well.

I could agree with that one; I had also written a post in my first year of blogging, Things I took for granted: Global Mean Temperature, that described how my perception of a global mean temperature changed from believer to skeptic and why I had a hard time believing that the (surface) datasets were accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Reading it back, I was a bit surprised that I had written this already in my first year of blogging. But, in line with the Karl et al paper, there were two things that I think were missing from this early piece.

First, that the data in the surface datasets are not measurements, but estimates derived from the temperature station measurements. In a way, that could already be concluded from the uneven spatial coverage, the convenience sampling and other measurement biases like Urban Heat Island, Time of Observation and who knows what more. This means that the homogenized end result will just be an estimate of the actual mean temperature.
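
To make that concrete, here is a rough sketch (Python, with hypothetical station values) of the kind of grid-cell averaging that turns scattered station readings into one “global mean” figure. It only illustrates the principle; it is not the actual GHCN/NCDC procedure, and all names and numbers in it are my own.

    # Illustrative sketch only: average the stations within each latitude/longitude
    # grid cell, then area-weight the cells. Cells without stations contribute
    # nothing, which is where the "estimate" character of the end result comes from.
    import math
    from collections import defaultdict

    def gridded_mean(stations, cell_size=5.0):
        """stations: list of (lat, lon, temperature_anomaly) tuples."""
        cells = defaultdict(list)
        for lat, lon, anomaly in stations:
            key = (math.floor(lat / cell_size), math.floor(lon / cell_size))
            cells[key].append(anomaly)
        weighted_sum, weight_total = 0.0, 0.0
        for (lat_idx, _), values in cells.items():
            cell_mid_lat = (lat_idx + 0.5) * cell_size
            weight = math.cos(math.radians(cell_mid_lat))  # cells shrink toward the poles
            weighted_sum += weight * (sum(values) / len(values))
            weight_total += weight
        return weighted_sum / weight_total

    # Three hypothetical stations, all clustered in the northern mid-latitudes
    print(gridded_mean([(50.8, 4.3, 0.6), (52.1, 5.2, 0.7), (40.7, -74.0, 0.4)]))

However the gridding and weighting are done, large parts of the globe end up represented by whatever stations happen to be nearby, or by nothing at all.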

Continue reading

Leave a known bias in the data or correct the data for the bias: a false dilemma

The Karl et al paper left me with more questions than answers. They used low-quality, scarce, spatially incomplete data and it was their assumptions that made the difference, yet they seem to give it such a high importance that, even when contradicted by high-quality data with better spatial coverage, they seem to be sure that their conclusion is relevant?!?! Other articles didn’t seem to be bothered by it; they just focused on the conclusion and pitied the “deniers” who again had to take yet another blow. This was no different for an article at Dailykos with the fascinating title As climate denier heads explode over the loss of the “hiatus”, one simple question shuts them up about a response from Tom Peterson to an email from Anthony Watts.

It was amusing to read that the author of the Dailykos article thinks that “deniers” “lost” the hiatus, while it is still clearly visible in all other datasets. But beyond the hyperbole there was some insight from a scientist who actually contributed to the paper, so I could see how one of the authors of the paper justifies coming to this conclusion with such data.

This is the part where he explains it:

So let me give you two examples from our paper. One of the new adjustments we are applying is extending the corrections to ship data, based on information derived from night marine air temperatures, up to the present (we had previously stopped in the 1940s). As we write in the article’s on-line supplement, “This correction cools the ship data a bit more in 1998-2000 than it does in the later years, which thereby adds to the warming trend. To evaluate the robustness of this correction, trends of the corrected and uncorrected ship data were compared to co-located buoy data without the offset added. As the buoy data did not include the offset the buoy data are independent of the ship data. The trend of uncorrected ship minus buoy data was -0.066°C dec-1 while the trend in corrected ship minus buoy data was -0.002°C dec-1. This close agreement in the trend of the corrected ship data indicates that these time dependent ship adjustments did indeed correct an artifact in ship data impacting the trend over this hiatus period.”

The second example I will pose as a question. We tested the difference between buoys and ships by comparing all the co-located ship and buoy data available in the entire world. The result was that buoy data averaged 0.12 degrees C colder than the ships. We also know that the number of buoys has dramatically increased over the last several decades. Adding more colder observations in recent years can’t help but add a cool bias to the raw data. What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias? The resulting trend would be the same whether we added 0.12 C to all buoy data or subtracted 0.12 C from all ship data.

That second example was the question that the author of the Dailykos article alluded to (and is also the subtitle): “What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias?”. At first glance, it sounds reasonable, but I think it is a false dilemma. It leaves us with the apparent choice of:

  1. leave the known bias in the data and get a wrong result
  2. correct the bias and get a correct result.

Option one is an obvious no-no. If one is sure there is a bias, there is nothing wrong with trying to adjust for it (when the strength of the bias is known). So option two seems the only real choice, and following that, the result doesn’t support the “pause”…
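
The arithmetic behind the last sentence of that quote is easy to check with a toy calculation (made-up numbers, not the actual data): shifting all buoy data up by 0.12 °C or all ship data down by 0.12 °C moves the blended series by a constant, so the fitted trend is the same either way, while the raw, uncorrected blend gets dragged down as the share of (cooler) buoy observations grows.

    # Toy illustration of the quoted claim, with hypothetical ship and buoy series.
    import numpy as np

    years = np.arange(2000, 2015)
    ship = 0.40 + 0.010 * (years - 2000) + 0.12     # hypothetical ship anomalies (running 0.12 °C warm)
    buoy = 0.40 + 0.010 * (years - 2000)            # hypothetical buoy anomalies
    share_buoy = np.linspace(0.1, 0.8, years.size)  # the buoys' share of observations grows over time

    def blended_trend(ship_series, buoy_series):
        blend = (1 - share_buoy) * ship_series + share_buoy * buoy_series
        return np.polyfit(years, blend, 1)[0]  # trend in °C per year

    print(blended_trend(ship, buoy))         # raw blend: trend dragged down by the growing cool buoy share
    print(blended_trend(ship, buoy + 0.12))  # buoys adjusted up by 0.12 °C
    print(blended_trend(ship - 0.12, buoy))  # ships adjusted down by 0.12 °C: same trend as the line above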

But is this the real choice we have? I think it is the wrong question altogether, knowing that the conclusion depended most on the adjustments of those sea surface temperatures.

Continue reading

Now there is a pause, now there isn’t

There was quite a controversy over the last week around the Thomas Karl et al paper: Possible artifacts of data biases in the recent global surface warming hiatus. They came to the conclusion that their results do not support the notion of a “slowdown” in the increase of global surface temperature. In other words, there is no such thing as the “pause”.

Reading this paper, what first caught my eye were the datasets used:

The data used in our long-term global temperature analysis primarily involve surface air temperature observations taken at thousands of weather observing stations over land, and for coverage across oceans, the data are sea surface temperature (SST) observations taken primarily by thousands of commercial ships and drifting surface buoys.

To represent land data they used a land surface dataset. No surprise here: if one wants to go back to the 1950s or earlier, there will not be much else. But this dataset is riddled with all kinds of problems like incomplete spatial coverage, siting issues, UHI and who knows what else. The error margin would be rather large.

When it comes to the sea surface temperatures, they used the observations of commercial ships and buoys. For those who don’t know it, sea surface temperatures were first measured by hauling buckets of seawater onto a commercial ship and sticking a thermometer in them. Later they were measured via sea water flowing through engine cooling water intakes and via drifting buoys. Also no real surprise: if one wants to go back a long time, there will not be much else to go with. But this comes with its own set of problems, like incomplete spatial coverage, changes in how the measurements were performed, different materials, different ships, depth of water intake depending on how heavily the ship is laden, and who knows what else.

The elephant in the room is that high-quality datasets, like the satellite datasets, are omitted. Both of the used datasets have issues with spatial coverage. The land dataset from weather stations collects measurements only in places where humans tend to live. The sea surface measurements via commercial vessels only capture temperatures in shipping lanes, not even at the same spot, nor at the same time. The buoys only measure wherever they happen to drift. The big question I have is: how well do those surface temperatures represent the real average surface temperature of the earth? How well do those bucket and intake measurements represent the real global ocean surface temperatures? Why would one want to change good data to agree with the bad? To me, that stands central in the whole issue.

Continue reading

The influence of the zombie thermometers

When I saw the response of the NCDC press office to the questions raised by Tony Heller (aka Steven Goddard), Paul Homewood and Anthony Watts about the reliability of the NCDC temperature network, it was not exactly what I was expecting:

Our algorithm is working as designed.

As far as I could understand the issue, it has to do with how their program (which calculates the average US temperature) works when temperature data is missing. Tony Heller claimed that 40% of the data is “fabricated”, meaning not coming from measurements. When there is no measurement for a certain station, the program makes an estimate by looking at the neighboring stations and derives a value for the missing data from those. So far so good, but something went horribly wrong: the program called that routine, for example, even when the underlying raw data was complete or, more mind-boggling, for stations that had been closed for many years, and yet estimates were generated for them.
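
As an illustration of the general idea only (this is not NCDC’s actual routine, whose details are far more involved), infilling from neighbors could look something like this inverse-distance-weighting sketch:

    # Deliberately simplified sketch of neighbor-based infilling. It only shows
    # the principle: a missing value is estimated as a distance-weighted average
    # of the values reported by nearby stations.
    import math

    def distance_km(a, b):
        """Great-circle distance between two (lat, lon) points in km (haversine)."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    def infill(target_location, neighbors):
        """neighbors: list of ((lat, lon), anomaly) for stations that did report."""
        weights = [1.0 / max(distance_km(target_location, loc), 1.0) for loc, _ in neighbors]
        return sum(w * anom for w, (_, anom) in zip(weights, neighbors)) / sum(weights)

    # A station that reported nothing this month gets an estimate from three neighbors
    print(infill((45.0, -100.0), [((45.5, -99.0), 1.2), ((44.0, -101.0), 1.0), ((46.0, -102.0), 1.4)]))

Whatever the exact weighting, such an estimate can only be as good as the neighboring stations it draws from.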

Yet they claim that their algorithm is working … as designed.

Not a bug, a feature. Nothing to see here, move along.

Although this infilling is perfectly fine (mathematically speaking that is) in a reliable network, there is an issue with it in a system with many discontinuities.

Like surface weather station data.

Only five years ago I got drawn into the global warming issue when visiting the Surface Stations website. From that moment on I realized that the temperature measurement network was not really in good shape. According to the current data, more than 90% of the stations have siting issues and will report temperatures with an error larger than 1 °C. It came as a surprise that there were many, many issues like heat-absorbing asphalt, stones, nearby buildings, air conditioners/external heat sources and whatnot. All of these push temperature readings upwards.

So in this case, if 90+ percent of the stations really have siting issues and there is infilling from neighboring stations, how reliable would that infilled data be???

Does NOAA/NCDC fabricate data, as is being insinuated? Well, that depends on the definition of “fabricate”. If it means willfully altering the data for a specific goal, then I don’t think that is the case. But if it means creating data where there was no data before, then yes, I think they are fabricating data, and chances are high that the adjustment will be upwards.

Fine, but just don’t call it high quality data anymore…

Everything but the issue

When I first came across the Telegraph article The scandal of fiddled global warming data, it was unusual to say the least. The article was about blogger Steven Goddard, who showed how, in recent years, NOAA has been “fiddling” with the records of its US Historical Climatology Network (USHCN). He made the extraordinary claim that 40% of the data was “fabricated”: data was created that wasn’t measured. The effect of this has been to downgrade earlier temperatures and to exaggerate those from recent decades, giving the impression that the Earth has been warming up much more than is justified by the actual data. You don’t really find many of these stories in the mainstream media.

The real surprise was the comments. In no time the number of comments surpassed 1,000, then 5,000, then 9,000. Now there are even 12,500+ comments. They were about numerous things: climate scientists/models being right or wrong, alternative energy, how many scientists agree or not, who does agree or not, melting of the Arctic, sea level rise, Obama, Democrats, greenhouse gases, fossil fuels, funding, all kinds of conspiracies, environmental control, Michael Mann, CAGW, the consensus, “97% of the scientists believe”, “trusted science” and so on and so on.

I saw many commenters calling others loons, idiots, morons, scientific illiterates, bad spellers, demented, deluded, fraudsters, denialists,… Emotions got stirred quite a bit.

Some said that Steven Goddard was just a blogger, not a climate scientist, and therefore one should not trust his analysis.

All those things could be very interesting or maybe even true, but there is one thing that I am missing in the comments. The only interesting question would be: IS IT TRUE? Did NOAA adjust the historic records downwards and the current records upwards? If so, what is the reason for doing so? If NOAA didn’t do so, why was the Goddard analysis not correct? All other questions/comments are beside the point.

In science it doesn’t matter who said so, even if it is said by a farmer, a priest or a lumberjack. The only thing that should matter is whether it is true or not. The fact that someone isn’t a scientist doesn’t mean he is not telling the truth. Many historic scientific breakthroughs were made by people who weren’t scientists.

But the babble seen in reaction to this article is just a distraction from the real question: is what is being said in the article true or not, and why?

I don’t know if the analysis of Steven Goddard is right or wrong. But would it not be more interesting to find out whether he is right or wrong, instead of avoiding the issue by stating he is not a climate scientist or babbling about other topics in global warming land? One can stay busy with that. 12,500+ comments long, to be exact.

Update
Apparently the extraordinary claim of Steven Goddard (who in the meantime has also revealed his real name) seems to be correct after all. There could be a bug in the USHCN reporting system that unnecessarily calls a routine that fills in data from neighboring stations. It will be interesting to see how NOAA reacts to this.

Of “deniers” and how not to convince them

The first thing that attracted my attention in the McGill article in which the Lovejoy paper was presented, was the puzzled polar bear on a small ice floe in a vast ocean. I have seen that photo many times before, but not in this setting.

I find it strange to see it here. First, because it is a well-known photoshopped image, used extensively by many global warming activists. Second, it has no bearing on anything in the article or in the paper (no mention of polar bears, nor of the Arctic, nor of the pole, nor even of melting ice). It is, however, a strong emotional image, which raises the question: who is the targeted audience here?

Yet it is there, for whatever reason. A possible hint to the reason is the following statement:

“This study will be a blow to any remaining climate-change deniers,” Lovejoy says.

He really said “deniers”. As a researcher he shows his bias here.

The part “any remaining” is a bit funny here (I guess he is hinting at the 97% consensus). So is the presumption that this paper would be a blow to skeptics (that is what he called them in the paper itself), as if this paper is going to persuade skeptics. Skeptics have always criticized the use of proxy data as well as historical weather data for “accurate” climate purposes. Now, just as in the Hockey Stick, he combines them both. And that would count as a blow that knocks die-hard skeptics out of their socks? That’s wishful thinking.

According to Lovejoy this is why it would be a blow to those “deniers”:

“Their two most convincing arguments – that the warming is natural in origin, and that the computer models are wrong” are either directly contradicted by this analysis, or simply do not apply to it.”

For the first argument, that “the warming is natural in origin”, many questions arise from his research. How accurate was the global average temperature before the late 1970s, when the data was gathered to measure local weather, not climate? With all the attached issues like sample size, UHI, only min/max/median values, little coverage over the oceans,… How does he explain the fact that weather stations recorded their highest temperatures in the 1940s (when CO2 was still a fraction of what it is now)? How does he know that there are no longer-term cycles that influence temperature, like 400 years, 5,000 years,…? How unusual is 0.9 °C over more than a century? And the pause is telling us that something other than CO2 is at work here. Something as strong as CO2. Although the CO2 concentration is the highest everrrr, there has been no further increase in the global average temperature for a decade and a half.

In the end, do skeptics argue that the warming is natural in origin? I have the impression they argue that we are not yet in a position to know whether the warming comes from natural or anthropogenic factors, let alone how much of it is anthropogenic. We have to rely on the “opinion” of scientists who claim that they are more certain now, yet can’t provide hard evidence.

For the second argument: yep, the simple statistical model he uses confirms the results of the climate models. But that doesn’t necessarily mean they are both right, especially when they are based on the same assumptions. Moreover, the fact that a hypothesis is confirmed in a (statistical) model doesn’t necessarily mean that the same mechanism is also at work in the real world.

It makes me wonder two things. Just as before with the “consensus”, I wonder what he thinks “deniers” are denying exactly. He could be in for a surprise. Also, the polarizing language and the emotional imagery being used make me wonder how balanced this investigation actually was and what exactly it was intended for.

Comparing temperature measurements and proxy data … as in apples and oranges

The paper Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming by Shaun Lovejoy was presented as a statistical analysis that rules out the natural-warming hypothesis with more than 99% certainty. That seems impressive at first, but after reading it, it is more hyperbole than substance. It seems riddled with quite a few logical fallacies. Previous posts were about cherry-picking the time frame and about assuming the conclusion.

This post will be about two datasets that seem comparable, but are not.

Lovejoy looked at two periods: before and after 1850. Before that time, he concludes, natural variation was at work; after that, the influence of man. But the big question is: are those two periods comparable? I would have no problem acknowledging that the influence of man in the 16th century would be much less than the influence now, but it is just as easy to understand that the quality of the data will not be in the same league.

This is how he explained the proxy data:

To assess the natural variability before much human interference, the new study uses “multi-proxy climate reconstructions” developed by scientists in recent years to estimate historical temperatures, as well as fluctuation-analysis techniques from nonlinear geophysics. The climate reconstructions take into account a variety of gauges found in nature, such as tree rings, ice cores, and lake sediments. And the fluctuation-analysis techniques make it possible to understand the temperature variations over wide ranges of time scales.

He seems to assume that proxy data is temperature data! As far as I know, it is not. Tree ring measurements, for example, are the result of a mix of temperature, precipitation, diseases, nutrients, competition, pests, weather events and who knows what more. Sure, temperature is a part of the equation, but there are many other parts. Comparing this with thermometer measurements is comparing apples with oranges. It is the same as grafting the instrumental dataset onto proxy data: it looks nice and impressive, but it is meaningless.

How could this be even close to accurate or conclusive? First, as said above, the two datasets can’t be compared because they are different. The instrumental dataset is direct temperature data (albeit homogenized and adjusted in order to extract climate information out of weather data). The proxy data is temperature data mixed with other signals; it is not pure temperature data. That is comparing two heavily processed datasets, each with their own uncertainties. Second, the proxy data is even more sparse than the weather data.

Whatever conclusion is drawn from this comparison, it shouldn’t be very convincing.