Category Archives: Climate Data

All gone by the year 2020: is observed glacier melt the basis for the prediction?

This is part 6 in the series on the prediction that glaciers in Glacier National Park will be gone by 2020. You might want to read part 1, part 2, part 3, part 4 and part 5 if you haven’t already.

In a previous post, I walked down memory lane to the time I was about 19 years old. In this post, I will go back to the more tender age of 14 and my first trip abroad. It was a trip to Switzerland, organized through my parents’ health insurance plan, intended to give working-class youth the opportunity to breathe in the pure mountain air of Switzerland. We were a bunch of kids of the same age and, as you would expect, there were a lot of outdoor activities like sports that we could choose from.

Not being that much into sports and being a more nature-minded boy, I often joined the hiking group that did nature walks in the vicinity of the center where we stayed. At one point, there was the opportunity to see a glacier. I had learned about glaciers in school and was eager to see one. It was not that close by, and after a stiff walk we arrived at the glacier. Boy, was I disappointed. We saw some melting ice chunks and a small patch of ice stretching along the mountain. Although it was nice to see some ice at that time of the year, it was a real anti-climax.

One of the guides then said that a lot of glaciers were shrinking and that this glacier was no exception. This was in the mid 1970s.

I was reminded of this scene from my youth when reading this paragraph in the Hall & Fagre paper “Modeled Climate-Induced Glacier Change in Glacier National Park, 1850–2100” (my emphasis):

Continue reading

The good old times when carbon dioxide and methane levels as well as temperatures were low

The previous post was about “the most popular contrarian argument” according to skepticalscience (“climate changes before, so current climate change is natural”) and what they seem to consider a live example of such a claim. I then argued in that post that it actually was not a good example of what they wanted to prove.

What I haven’t discussed yet is how skepticalscience “debunked” this most popular contrarian argument. They did this on the “Climate’s changed before” myth page that was apparently based on this example.

They “debunked” this “myth” by stating that the climate is indeed always changing, but the difference is that it is changing much faster now than in the past because of our increasing emissions. This is how it starts:

Continue reading

Heat waves: extreme temperatures or just higher summer temperatures?

In the previous post I ended rather abruptly by saying that one of the problems with the vagueness of the definition of a heat wave is that heat waves become difficult to compare. What I meant by that is that those members of the public who don’t catch the nuances of the different definitions think that exactly the same thing is being talked about, while the definitions are in fact distinctly different.

For example, when the weather (wo)men in our region talk about a heat wave, they mean a period of five consecutive days with temperatures over 25 °C, of which three days with temperatures over 30 °C, at Uccle (Belgium) or De Bilt (the Netherlands). We have no real problem with that, because in our country we are used to temperatures of 20 °C, 25 °C or even higher. But we don’t cope well with 30+ °C temperatures, especially when they last more than a couple of days. These are rare occurrences in our region and we are not used to them.

Bottom line: when our weather services declare a heat wave, it is hot and we need to take measures to cope with such (rare) heat. That is the significance of the definition: it captures the extremes of a region. Extremes that could have an impact on the vulnerable, like small children, old people or those who are ill. In our current societal structure and in our climate, we just aren’t accustomed to these temperatures.
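To make that local definition concrete, here is a minimal sketch of such a check in Python. The 25 °C and 30 °C thresholds and the day counts come from the definition described above (assuming daily maximum temperatures); the function name and the example week are just made up for illustration.

```python
# Minimal sketch of the regional heat wave definition described above:
# five (or more) consecutive days over 25 °C, of which three days over 30 °C.
# Input is assumed to be daily maximum temperatures in °C.

def is_heat_wave(daily_max_temps):
    """Return True if some run of 5+ consecutive days over 25 °C
    contains at least 3 days over 30 °C."""
    run = []  # current streak of days above 25 °C
    for t in daily_max_temps:
        if t > 25.0:
            run.append(t)
            if len(run) >= 5 and sum(1 for d in run if d > 30.0) >= 3:
                return True
        else:
            run = []  # streak broken, start over
    return False

# Invented example week that would be flagged as a heat wave:
week = [26.1, 27.4, 30.5, 31.2, 30.8, 28.0, 24.0]
print(is_heat_wave(week))  # True
```

The point of such a regional threshold is exactly what the definition expresses: the flag only goes up when temperatures are extreme for Uccle or De Bilt, not merely warm.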

Now enter the new definition of a heat wave, based on the average temperature over three consecutive days. There is no regional threshold anymore, because the investigators wanted to compare between different countries with different local definitions. Now we see an upward-trending graph:

Continue reading

Reliable measurements or pliable estimates?

The last three posts were mostly about the adjustments to the ocean data made in the Karl 2015 paper. This is because the adjustments to the ocean data had the biggest impact on the result (that there wasn’t something like a “hiatus”). Kevin Marshall of the excellent blog manicbeancounter.wordpress.com reminded me in a comment on the previous post that the surface datasets have issues as well.

I could agree with that one. I had also written a post in my first year of blogging, Things I took for granted: Global Mean Temperature, that described how my perception of a global mean temperature changed from believer to skeptic and why I had a hard time believing that the (surface) datasets were accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Reading it back, I was a bit surprised that I had written this already in my first year of blogging. But, in line with the Karl et al paper, there were two things that I think were missing in this early piece.

First, that the data in the surface datasets are not measurements, but estimates derived from the temperature station measurements. In a way, that could be concluded from the uneven spatial coverage, the convenience sampling and other measurement biases like Urban Heat Island, Time of Observation and who knows what more. This means that the homogenized end result will only ever be an estimate of the actual mean temperature.
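To make that distinction concrete, here is a minimal, purely illustrative sketch of how such an estimate is typically constructed: station anomalies are averaged per grid cell and the occupied cells are then weighted by area. The station values, the crude 30° grid and the weighting below are all invented for illustration; real products use finer grids, homogenization and infilling, each of which adds its own layer of estimation.

```python
import math
from collections import defaultdict

# Invented station records for illustration: (latitude, longitude, anomaly in °C).
# Real datasets hold thousands of stations with very uneven spatial coverage.
stations = [
    (51.0, 4.5, 0.6), (48.0, 2.3, 0.4), (40.7, -74.0, 0.9),
    (-33.9, 18.4, 0.3), (35.7, 139.7, 0.7), (64.8, -147.7, 1.2),
]

GRID = 30  # crude 30° x 30° grid cells, chosen only to keep the sketch short

def cell(lat, lon):
    """Map a coordinate to a grid cell index."""
    return (int(lat // GRID), int(lon // GRID))

# Step 1: average the station anomalies per grid cell.
cells = defaultdict(list)
for lat, lon, anom in stations:
    cells[cell(lat, lon)].append(anom)

# Step 2: weight each occupied cell by its approximate area (cosine of the
# cell-centre latitude) and average. Empty cells are simply ignored here;
# real products interpolate or infill them, adding yet another estimation step.
num = den = 0.0
for (ilat, _), anomalies in cells.items():
    centre_lat = ilat * GRID + GRID / 2
    weight = math.cos(math.radians(centre_lat))
    num += weight * (sum(anomalies) / len(anomalies))
    den += weight
print(f"'Global' mean anomaly estimate: {num / den:.2f} °C")
```

Every choice in this chain (grid size, weighting, what to do with empty cells) changes the number that comes out, which is why the end result is an estimate rather than a measurement.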

Continue reading

Leave a known bias in the data or correct the data for the bias: a false dilemma

The Karl et al paper left me with more questions than answers. They used low-quality, scarce, spatially incomplete data, and it was their assumptions that made the difference, yet they seem to give it such a high importance that, even when contradicted by high-quality data with better spatial coverage, they seem to be sure that their conclusion is relevant?!

Other articles didn’t seem to be bothered by it; they just focused on the conclusion and pitied the “deniers” who had to take yet another blow. This was no different for an article at Dailykos with the fascinating title As climate denier heads explode over the loss of the “hiatus”, one simple question shuts them up, about a response from Tom Peterson to an email from Anthony Watts. It was amusing to read that the author of the Dailykos article thinks that “deniers” “lost” the hiatus, while it is still clearly visible in all other datasets. But beyond the hyperbole there was some insight from a scientist who actually contributed to the paper, so I could see how one of the authors justifies coming to this conclusion with such data.

This is the part where he explains it:

So let me give you two examples from our paper. One of the new adjustments we are applying is extending the corrections to ship data, based on information derived from night marine air temperatures, up to the present (we had previously stopped in the 1940s). As we write in the article’s on-line supplement, “This correction cools the ship data a bit more in 1998-2000 than it does in the later years, which thereby adds to the warming trend. To evaluate the robustness of this correction, trends of the corrected and uncorrected ship data were compared to co-located buoy data without the offset added. As the buoy data did not include the offset the buoy data are independent of the ship data. The trend of uncorrected ship minus buoy data was -0.066°C dec⁻¹ while the trend in corrected ship minus buoy data was -0.002°C dec⁻¹. This close agreement in the trend of the corrected ship data indicates that these time dependent ship adjustments did indeed correct an artifact in ship data impacting the trend over this hiatus period.”

The second example I will pose as a question. We tested the difference between buoys and ships by comparing all the co-located ship and buoy data available in the entire world. The result was that buoy data averaged 0.12 degrees C colder than the ships. We also know that the number of buoys has dramatically increased over the last several decades. Adding more colder observations in recent years can’t help but add a cool bias to the raw data. What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias? The resulting trend would be the same whether we added 0.12 C to all buoy data or subtracted 0.12 C from all ship data.

That second example was the question that the author of the Dailykos article alluded to (and is also the subtitle): “What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias?”. At first glance, it sounds reasonable, but I think it is a false dilemma. It leaves us with the apparent choice of:

  1. leave the known bias in the equation and get a wrong result
  2. correct the bias and get a correct result.

Option one is an obvious no-no. If one is sure there is a bias, there is nothing wrong with trying to adjust for it (when the strength of the bias is known). So option two seems the only real choice, and following that, the result doesn’t support the “pause”…
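As an aside, Peterson’s claim that the trend comes out the same whichever way the 0.12 °C offset is applied is easy to verify: the two corrected series differ by a constant 0.12 °C everywhere, and a constant shift does not change a trend. Here is a small sketch with invented numbers (only the 0.12 °C offset comes from the quote; the yearly values and the growing buoy share are made up):

```python
# Verify, with invented numbers, that applying the 0.12 °C ship-buoy offset
# to either side gives the same trend. Only the 0.12 °C figure comes from
# the quoted explanation; the yearly values and ship/buoy shares are made up.

def trend_per_decade(series):
    """Ordinary least-squares slope, expressed per 10 time steps (years)."""
    n = len(series)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(series) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope * 10

years = range(10)
ship = [0.30 + 0.01 * y for y in years]       # invented ship anomalies (°C)
buoy = [s - 0.12 for s in ship]               # buoys read 0.12 °C colder
buoy_share = [0.2 + 0.06 * y for y in years]  # buoy fraction grows over time

def merged(ship_adj, buoy_adj):
    """Combine ship and buoy series using the time-varying buoy share."""
    return [(1 - f) * s + f * b
            for f, s, b in zip(buoy_share, ship_adj, buoy_adj)]

add_to_buoys = merged(ship, [b + 0.12 for b in buoy])
sub_from_ships = merged([s - 0.12 for s in ship], buoy)
no_correction = merged(ship, buoy)

print(trend_per_decade(add_to_buoys))    # 0.10 °C/decade
print(trend_per_decade(sub_from_ships))  # 0.10 °C/decade (only the level differs)
print(trend_per_decade(no_correction))   # lower: the growing buoy share drags it down
```

It also shows the point about the raw merge: as the buoy share grows, the uncorrected combination picks up a spurious cooling relative to either corrected version.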

But is this the real choice we have? I think it is the wrong question altogether, knowing that the conclusion depended mostly on the adjustments to those sea surface temperatures.

Continue reading

Now there is a pause, now there isn’t

There was quite a controversy last week around the Thomas Karl et al paper Possible artifacts of data biases in the recent global surface warming hiatus. They came to the conclusion that their results do not support the notion of a “slowdown” in the increase of global surface temperature. In other words, there is no such thing as the “pause”.

Reading this paper, what first caught my eye were the datasets used:

The data used in our long-term global temperature analysis primarily involve surface air temperature observations taken at thousands of weather observing stations over land, and for coverage across oceans, the data are sea surface temperature (SST) observations taken primarily by thousands of commercial ships and drifting surface buoys.

To represent land data they used a land surface dataset. No surprise here: if one wants to go back to the 1950s or earlier, there will not be much else. But this dataset is riddled with all kinds of problems like incomplete spatial coverage, siting issues, UHI and who knows what else. The error margin would be rather large.

When it comes to the sea surface temperatures, they used the observations of commercial ships and buoys. For those who don’t know it, sea surface temperatures were first measured by hauling buckets of seawater onto a commercial ship and sticking a thermometer in it, and later via sea water flowing through engine cooling water intakes and via drifting buoys. Also no real surprise: if one wants to go back a long time, there will not be much else to go with. But this comes with its own set of problems, like incomplete spatial coverage, changes in how the measurements were performed, different materials, different ships, the depth of the water intake depending on how heavily the ship is laden, and who knows what else.

The elephant in the room is that high-quality datasets, like the satellite datasets, are omitted. Both of the used datasets have issues with spatial coverage. The land dataset from weather stations only collects measurements in places where humans tend to live. The sea surface measurements via commercial vessels only capture temperatures in shipping lanes, not even at the same spot or at the same time, and the buoys only measure wherever they happen to drift. The big question I have is: how well do those surface temperatures represent the real average surface temperature of the earth? How well do those bucket and intake measurements represent the real global ocean surface temperatures? Why would one want to change good data to agree with the bad? To me, that stands central in the whole issue.

Continue reading

The influence of the zombie thermometers


When I saw the response of the NCDC press office to the questions raised by Tony Heller (aka Steven Goddard), Paul Homewood and Anthony Watts about the reliability of the NCDC temperature network, it was not exactly what I was expecting:

Our algorithm is working as designed.

As far as I could understand the issue, it has to do with how their program (which calculates the average US temperature) works when temperature data is missing. Tony Heller claimed that 40% of the data is “fabricated”, meaning not coming from measurements. When there is no measurement for a certain station, the program looks at the neighboring stations and derives an estimate of the missing value from them. So far so good, but something went horribly wrong: the program called that routine, for example, when the underlying raw data was complete or, more mind-boggling, even for stations that had been closed for many years, yet estimates were still generated for them.

Yet they claim that their algorithm is working … as designed.

Not a bug, a feature. Nothing to see here, move along.

Although this infilling is perfectly fine (mathematically speaking that is) in a reliable network, there is an issue with it in a system with many discontinuities.

Like surface weather station data.
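To illustrate the mechanism, here is a minimal sketch of infilling a missing reading from neighboring stations with a simple inverse-distance-weighted average. This is not NCDC’s actual algorithm (their pairwise homogenization is far more involved); it is only a stand-in to show that whatever bias the neighbors share ends up in the estimate.

```python
# Minimal sketch of infilling a missing reading from neighboring stations
# using an inverse-distance-weighted average. This is NOT the actual NCDC
# algorithm (their pairwise homogenization is far more involved); it only
# shows that a bias shared by the neighbors is inherited by the estimate.

def infill(neighbors):
    """neighbors: list of (distance_km, temperature_degC) tuples for
    stations that do have a reading; returns the estimated temperature."""
    weighted = [(1.0 / d, t) for d, t in neighbors]
    total_weight = sum(w for w, _ in weighted)
    return sum(w * t for w, t in weighted) / total_weight

# Invented example: the true local temperature is 20.0 °C, but the three
# nearest stations all read about 1 °C warm because of siting problems.
neighbors = [(12.0, 21.1), (25.0, 20.9), (40.0, 21.3)]
print(round(infill(neighbors), 2))  # ≈ 21.1 °C: the warm bias is passed on
```

In a well-sited network this is harmless; with widespread siting problems, the infilled values simply reproduce the warm bias of their neighbors.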

Only five years ago I got drawn into the global warming issue when visiting the Surface Stations website. From that moment on, I realized that the temperature measurement network was not really in good shape. According to the current data, more than 90% of the stations have siting issues and will report temperatures with an error larger than 1 °C. It came as a surprise that there were many, many issues like heat-sucking asphalt, stones, nearby buildings, air conditioners/external heat sources and what not. All these push temperature readings upwards.

So in this case, if 90+ percent of the stations really have siting issues and there is infilling from neighboring stations, how reliable would that infilled data be???

Does NOAA/NCDC fabricate data, as is being insinuated? Well, that depends on the definition of “fabricate”. If it means willfully altering the data for a specific goal, then I don’t think that is the case. But if it means creating data where there was no data before, then yes, I think they are fabricating data, and chances are high that the adjustment will be upwards.

Fine, but just don’t call it high-quality data anymore…