There was quite a controversy over the last week around the Thomas Karl et al paper: Possible artifacts of data biases in the recent global surface warming hiatus. They came to the conclusion that their results do not support the notion of a “slowdown” in the increase of global surface temperature. In other words, there is no such thing as the “pause”.
Reading this paper, what first caught my eye were the datasets used:
The data used in our long-term global temperature analysis primarily involve surface air temperature observations taken at thousands of weather observing stations over land, and for coverage across oceans, the data are sea surface temperature (SST) observations taken primarily by thousands of commercial ships and drifting surface buoys.
To represent the land, they used a land surface dataset. No surprise here: if one wants to go back to the 1950s or earlier, there will not be much else. But this dataset is riddled with all kinds of problems, like incomplete spatial coverage, siting issues, urban heat island (UHI) effects and who knows what else. The error margin would be rather large.
When it comes to the sea surface temperatures, they used the observations of commercial ships and buoys. For those who don’t know it, sea surface temperatures were first measured by hauling buckets of seawater onto a commercial ship and sticking a thermometer in them. Later they came via seawater flowing through engine cooling water intakes and via drifting buoys. Also no real surprise: if one wants to go back that far, there will not be much else to go with. But this comes with its own set of problems, like incomplete spatial coverage, changes in how the measurements were performed, different materials, different ships, the depth of the water intake depending on how heavily the ship is laden, and who knows what else.
The elephant in the room is that high quality datasets, like the satellite datasets, are omitted. Both of the datasets used have issues with spatial coverage. The land dataset from weather stations collects measurements only in places where humans tend to live. The sea surface measurements via commercial vessels capture temperatures only in shipping lanes, not even at the same spots or at the same times. The buoys measure only where they happen to drift. The big question I have is: how well do those surface temperatures represent the real average surface temperature of the earth? How well do those bucket and intake measurements represent the real global ocean surface temperatures? Why would one want to change good data to agree with the bad? To me, that question is central to the whole issue.
According to the Materials/Methods, satellite measurements were included in previous versions of their SST analysis. This means that their analysis now rests only on low-quality data with incomplete coverage. With this kind of data it would be very difficult to correctly represent global temperatures or to maintain a consistent dataset.
This reminded me of the claim that sea level rise was in fact accelerating, while current measurements don’t even show this. Those investigators did this via a “re-assessment” on the basis of scarce and incomplete historical tidal gauge data, coming to the conclusion that sea level rise was lower in the past and therefore accelerating…
Here we also see a re-assessment on the basis of scarce and incomplete (historical) surface temperature data, contradicting the other observations. What is that fascination with low-quality datasets? Why do they think that this re-assessment is somehow relevant?
To be honest, I don’t have a problem with a result that differs from or even contradicts other results. If it is the one based on qualitatively better data, I couldn’t care less. But that is not what we have here. What we have is a result based on lower-quality data that contradicts the high-quality data. In that case I am not really impressed.
But this is of course not how the media will see it. They will focus solely on the conclusion and not look at how that result was established. Yet this is crucial information for putting the result in context…