Monthly Archives: June 2015

This will be for our children

Urgenda, the organization that sued the Dutch Government for not doing enough to prevent dangerous climate change, won its case last Wednesday. The judge ordered the Dutch Government to cut greenhouse gas emissions by 25% compared to 1990 levels by 2020.

My first reaction was amazement. Isn't there something like the separation of powers? Yet now judicial power is intervening in legislative power. Isn't the Netherlands a democracy? Yet now a small activist group is bypassing the majority.

Then there was this statement made after the judgement that kept on resonating in my head:

“A courageous judge. This is fantastic,” said Sharona Ceha, another Urgenda worker. “This is for my children and grandchildren.”

In a way, I can agree with this one, but obviously not in the way they see it…

Let's for a moment look at reality. Global warming/climate change is something, well, global. If you look at global emissions, they are going up, steeply up. Three countries are responsible for more than half of the emissions and for 3/4 of the increase in global emissions compared to 2012. These are China (which isn't planning to decrease emissions until 2030), the USA (which says it favors emission reduction, but doesn't ratify anything, and with the Republicans in the majority that could be rather difficult) and India (which said it will not decrease emissions in the first place).

Let's look at some numbers to get a feel for the proportions. Below are the three largest emitters, which together account for more than half of global emissions, compared with total global emissions and the emissions of the Netherlands:

Continue reading

The privileges of the anointed

There was some commotion on Twitter caused by some tweets by Naomi Oreskes. It started rather innocently with an earlier, probably at the time not much noticed, tweet. But then it culminated in something I have seen several times before. Before we get to that, bear with me and let's first have some background.

As I have said, it started rather innocently with an earlier tweet. In it she described her passion for snow to a follower who asked if she was in Idaho:

At first it seems just a little chat between two like-minded spirits. But it triggered a strong reaction because it was in such stark contrast with a more recent statement in which she criticized the latest encyclical for not seeing "extreme consumerism" as the issue:

Going “a lot” to Utah from her home (a round trip of almost 5,000 miles) would give her a huuuuuuge carbon footprint. If that isn’t “extreme consumerism” then I don’t know what is.

Whether done with “love” or not.

She apparently didn’t see it that way and in her typical style, ranting about the “deniers”, she tweeted this response:

I had to laugh at this one, because that was not the issue.

Not even close.

Continue reading

Reliable measurements or pliable estimates?

The last three posts were mostly about the adjustments of the ocean data in the Karl et al 2015 paper. This is because the adjustments to the ocean data had the biggest impact on the result (that there wasn't something like a "hiatus"). Kevin Marshall of the excellent blog manicbeancounter.wordpress.com reminded me in a comment on the previous post that the surface datasets have issues as well.

I could agree with that one; I had already written a post in my first year of blogging, Things I took for granted: Global Mean Temperature, that described how my perception of a global mean temperature changed from believer to skeptic and why I had a hard time believing that the (surface) datasets were accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Reading it back, I was a bit surprised that I had written this already in my first year of blogging. But, in line with the Karl et al paper, there are two things that I think were missing in this early piece.

First, the data in the surface datasets are not measurements, but estimates derived from the temperature station measurements. In a way that could already be concluded from the uneven spatial coverage, the convenience sampling and other measurement biases like the Urban Heat Island effect, Time of Observation and who knows what more. This means that the homogenized end result will only be an estimate of the actual mean temperature.
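To make concrete why such a global mean is an estimate rather than a measurement, here is a minimal sketch with made-up numbers (the real datasets use thousands of grid cells and elaborate homogenization, not this toy): station readings are aggregated into latitude grid cells, and the cells are averaged with cosine-of-latitude weights, with empty cells simply left out.

```python
import math

# Hypothetical toy "grid": (latitude of cell centre, mean anomaly in °C).
# None marks a cell with no station data at all (e.g. polar or open ocean).
cells = [(80.0, 1.2), (40.0, 0.9), (0.0, 0.6), (-40.0, 0.7), (-80.0, None)]

def weight(lat):
    # Grid cells cover less area near the poles, so they get less weight.
    return math.cos(math.radians(lat))

# Average only over cells that have data; the empty cell is skipped,
# which silently assumes it behaves like the observed ones.
num = sum(weight(lat) * t for lat, t in cells if t is not None)
den = sum(weight(lat) for lat, t in cells if t is not None)
global_mean = num / den
print(round(global_mean, 3))  # → 0.752
```

Every choice in this chain (gridding, weighting, what to do with empty cells) changes the outcome, which is exactly why the end result is an estimate.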

Continue reading

Amusing consequences from the busting of the “hiatus”

The conclusion of the Karl et al paper is not only puzzling, but it has some very amusing qualities as well. If it is really true that there are “possible artifacts of data biases in the recent global surface warming hiatus” then this puts some other things in an awkward light.

The paper showed a different-looking temperature series in which there is no "hiatus" in warming: it just goes straight up, where modern measurements find a standstill in the temperature increase of almost two decades now.

If that is really true, then obviously the other datasets must be wrong. They still show that, according to the paper, non-existent hiatus. Because the result came from their choice of adjustments for scarce data, one could conclude that the adjustment of scarce, spatially incomplete data is preferable to higher quality data with better spatial coverage… 🙂

But the most amusing part is that in the last years, no time and effort was spared trying to explain that “hiatus”. Continue reading

Leave a known bias in the data or correct the data for the bias: a false dilemma

The Karl et al paper left me with more questions than answers. They used low quality, scarce, spatially incomplete data and it was their assumptions that made the difference, yet they attach such a high importance to the result that, even when contradicted by high quality data with better spatial coverage, they seem to be sure that their conclusion is relevant?!?!

Other articles didn't seem to be bothered by this; they just focused on the conclusion and pitied the "deniers" who again had to take yet another blow. This was no different for an article at Dailykos with the fascinating title As climate denier heads explode over the loss of the "hiatus", one simple question shuts them up, about a response from Tom Peterson to an email from Anthony Watts. It was amusing to read that the author of the Dailykos article thinks that the "deniers" "lost" the hiatus, while it is still clearly visible in all other datasets. But beyond the hyperbole there was some insight from a scientist who actually contributed to the paper, so I could see how one of the authors justifies coming to this conclusion with such data.

This is the part where he explains it:

So let me give you two examples from our paper. One of the new adjustments we are applying is extending the corrections to ship data, based on information derived from night marine air temperatures, up to the present (we had previously stopped in the 1940s). As we write in the article's on-line supplement, "This correction cools the ship data a bit more in 1998-2000 than it does in the later years, which thereby adds to the warming trend. To evaluate the robustness of this correction, trends of the corrected and uncorrected ship data were compared to co-located buoy data without the offset added. As the buoy data did not include the offset the buoy data are independent of the ship data. The trend of uncorrected ship minus buoy data was -0.066°C dec⁻¹ while the trend in corrected ship minus buoy data was -0.002°C dec⁻¹. This close agreement in the trend of the corrected ship data indicates that these time dependent ship adjustments did indeed correct an artifact in ship data impacting the trend over this hiatus period."

The second example I will pose as a question. We tested the difference between buoys and ships by comparing all the co-located ship and buoy data available in the entire world. The result was that buoy data averaged 0.12 degrees C colder than the ships. We also know that the number of buoys has dramatically increased over the last several decades. Adding more colder observations in recent years can’t help but add a cool bias to the raw data. What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias? The resulting trend would be the same whether we added 0.12 C to all buoy data or subtracted 0.12 C from all ship data.

That second example was the question that the author of the Dailykos article alluded to (and is also the subtitle): “What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias?”. At first glance, it sounds reasonable, but I think it is a false dilemma. It leaves us with the apparent choice of:

  1. leave the known bias in the equation and get a wrong result
  2. correct the bias and get a correct result.

Option one is an obvious no-no. If one is sure there is a bias, there is nothing wrong with trying to adjust for it (when the strength of the bias is known). So option two seems the only real choice, and following that, the result doesn't support the "pause"…

But is this the real choice we have? I think it is the wrong question altogether, knowing that the conclusion depended mostly on the adjustments of those sea surface temperatures.

Continue reading

Now there is a pause, now there isn’t

There was quite a controversy last week around the Thomas Karl et al paper: Possible artifacts of data biases in the recent global surface warming hiatus. They came to the conclusion that their results do not support the notion of a "slowdown" in the increase of global surface temperature. In other words, there is no such thing as the "pause".

Reading this paper, what first caught my eye were the datasets used:

The data used in our long-term global temperature analysis primarily involve surface air temperature observations taken at thousands of weather observing stations over land, and for coverage across oceans, the data are sea surface temperature (SST) observations taken primarily by thousands of commercial ships and drifting surface buoys.

To represent land data they used a land surface dataset. No surprise here: if one wants to go back to the 1950s or earlier, there will not be much else. But this dataset is riddled with all kinds of problems, like incomplete spatial coverage, siting issues, UHI and who knows what else. The error margin will be rather large.

When it comes to the sea surface temperatures, they used the observations of commercial ships and buoys. For those who don't know, sea surface temperatures were first measured by hauling buckets of seawater onto a commercial ship and sticking a thermometer in them. Later they were measured via sea water flowing through engine cooling water intakes, and via drifting buoys. Again, no real surprise: if one wants to go back a long time, there will not be much else to go with. But this comes with its own set of problems, like incomplete spatial coverage, changes in how the measurements were performed, different materials, different ships, the depth of the water intake depending on how heavily the ship is laden, and who knows what else.

The elephant in the room is that high quality datasets, like the satellite datasets, were omitted. Both of the datasets used have issues with spatial coverage. The land dataset from weather stations only collects measurements in places where humans tend to live. The sea surface measurements via commercial vessels will only capture temperatures in shipping lanes, not even at the same spot or at the same time. The buoys only measure wherever they happen to drift. The big questions I have are: how well do those surface temperatures represent the real average surface temperature of the earth? How well do those bucket and intake measurements represent the real global ocean surface temperatures? And why would one want to change good data to agree with the bad? To me, that is central to the whole issue.

Continue reading

Truly, seriously, scientists can provide a distribution of possible values of climate sensitivity

The last Lewandowsky paper contained quite a few statements that made my eyebrows lift; just look at the last two posts. Another eyebrow moment was this statement from the seepage article:

We know from earlier work that uncertainty is no cause for inaction – on the contrary, greater scientific uncertainty should make us worry more, not less, about the potential consequences of climate change.

That seems to be a much repeated theme in his work. I saw it previously explained in Uncertainty is not your Friend and in even earlier work. The idea behind it is that uncertainty means things could get worse than anticipated.

The reasoning in the “Uncertainty is not your friend”-article is explained like this (my emphasis):

Without going any further, we can already draw one conclusion from this fact: If our best guess of climate sensitivity is 3 degrees, and the uncertainty range is 2-4.5, then things could be worse than expected. We expect 3 degrees but might get 4.5 – of course, we could also get as "little" as 2, but we are ignoring the vast majority of possible outcomes if we assume (or hope) that we will "only" get 2 degrees.

So clearly, uncertainty means that things could be worse than anticipated.

But the problem does not end there. There are two additional aspects of uncertainty that we need to consider.

First, we must consider the distribution of climate sensitivity estimates. We know that there is a “best” (mean) estimate, and we know that there is a range of most likely values. But it turns out that climate scientists can do better than that: they can provide a distribution of possible values of climate sensitivity which attaches a probability of occurrence to a range of possible values.

and

This final consideration concerns the effects of the magnitude of uncertainty. All other things being equal, should we be more worried by greater uncertainty or less worried? If scientists had really down-played uncertainty – as some commentators have insinuated – what would the effects be? What if uncertainty is actually greater than scientists think?

I can understand what he is saying. Uncertainty does indeed mean that things could be worse than anticipated. As far as I understand it, the reasoning is as follows:
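Note that the quoted range of 2-4.5 degrees is itself asymmetric around the best guess of 3: a factor 1.5 below and above the median, the shape a right-skewed (for instance lognormal) distribution produces. A minimal sketch of the argument, with my own toy lognormal assumption rather than anything from the paper, showing how widening such a distribution inflates the probability of high-end outcomes:

```python
import math

# Toy illustration: treat climate sensitivity S as lognormally distributed
# (it cannot go below zero, so the distribution is skewed to the right)
# with a median "best guess" of 3 °C. Sigma is the spread of ln(S).
def p_above(threshold, median, sigma):
    # P(S > threshold) for a lognormal: 1 - Phi((ln t - ln median) / sigma)
    z = (math.log(threshold) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Growing uncertainty (sigma) raises the chance of exceeding 4.5 °C.
for sigma in (0.2, 0.3, 0.4):
    print(sigma, round(p_above(4.5, 3.0, sigma), 3))
```

With a median of 3, the probability of ending below 2 equals the probability of ending above 4.5 (both are a factor 1.5 from the median), yet the high side is further away in degrees, which is the sense in which "things could be worse than expected".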

Continue reading