The conclusion of the Karl et al. paper is not only puzzling, it also has some very amusing qualities. If it is really true that there are “possible artifacts of data biases in the recent global surface warming hiatus”, then this puts some other things in an awkward light.
The paper presented a different-looking temperature series in which there is no “hiatus” in warming: it simply goes straight up, where modern measurements find a standstill in temperature increase of almost two decades by now.
If that is really true, then obviously the other datasets must be wrong: they still show the, according to the paper, non-existent hiatus. Because the result came from their choice of adjustments for scarce data, one could conclude that adjusting scarce, spatially incomplete data is preferable to higher-quality data with better spatial coverage… 🙂
But the most amusing part is that in recent years, no time and effort was spared in trying to explain that “hiatus”. Many dozens of explanations were found to justify its existence, like volcanoes, pollution or heat now residing in the deep ocean instead of at the surface. That is not only seepage from the skeptic theme into the established science, but a widespread delusion among climate scientists 😉 Are those explanations still correct? Or were they just meant to get rid of the hiatus by making it a non-issue?
By the way, the conclusion that the heat went into the deep ocean was also based on scarce, spatially incomplete data. Where did we hear that before? Why are they drawn to such low-quality data, again and again? Why do they think that adjustments of such data are somehow better than actual measurements?
But hey, the “experts” said it, so it must be true 😉
The relative scarcity of data is a line of thought that I have also been considering recently with respect to the land temperature data. There are three aspects to consider here.
1. Temperature change in different parts of the globe varies significantly in magnitude from the global average. What is more, there are significant areas that appear out of phase with global trends. See http://data.giss.nasa.gov/gistemp/maps/ – particularly for the 250km smoothing radius.
2. A necessary part of building regional or global temperature datasets is to homogenize the data. That is, to rid the data of measurement biases – or at least to reduce their effects – by pairwise comparisons between neighbouring stations.
3. Since World War 2, there has been a vast increase in the number of weather stations, particularly outside Europe and the USA.
The more spatially dispersed the weather stations, the more homogenization will smooth out real temperature variations between stations. The net effect is that the early twentieth-century warming could be truncated to a greater extent than the warming of the last 40 years.
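The smoothing effect described above can be illustrated with a deliberately simplified toy sketch (my own hypothetical example, not the actual pairwise homogenization algorithm used for any real dataset). Two sparse stations have genuinely different trends; with no nearby neighbours to arbitrate, an adjustment that nudges each series toward the pair mean removes part of that real regional difference as if it were a bias:

```python
# Toy illustration: pairwise adjustment of two sparse stations.
# Hypothetical and simplified - real homogenization algorithms use
# breakpoint detection against many neighbours, not a blanket nudge.

def homogenize_pair(a, b, weight=0.5):
    """Nudge two series toward each other by a fraction of their difference.

    In a sparse network there is no third neighbour to tell a real
    regional difference apart from an instrumental bias, so part of the
    real difference gets adjusted away.
    """
    adj_a = [x - weight * (x - y) / 2 for x, y in zip(a, b)]
    adj_b = [y + weight * (x - y) / 2 for x, y in zip(a, b)]
    return adj_a, adj_b

decades = range(5)
station_a = [0.2 * t for t in decades]   # real warming: 0.2 deg/decade
station_b = [0.0 for _ in decades]       # real standstill, out of phase

adj_a, adj_b = homogenize_pair(station_a, station_b)

def trend(s):
    """Simple endpoint trend per decade."""
    return (s[-1] - s[0]) / (len(s) - 1)

print(trend(station_a), trend(station_b))  # real trends: 0.2 and 0.0
print(trend(adj_a), trend(adj_b))          # adjusted: pulled toward each other
```

After adjustment, the warming station's trend is damped and the flat station acquires a spurious trend, even though neither series contained any measurement bias. The sparser the network in a given era, the stronger this effect on the regional pattern.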
I think this may apply to sea surface temperatures as well.
So whilst I agree that the argument against the hiatus (along with the multiple explanations for it) relies on scarce information and on the ways to adjust for it, I also think that early twentieth-century warming may have been reduced due to a lack of data.
You are absolutely right that there are also scarcity issues with the land datasets, and this will surely make the uncertainty bigger going further back into the past, when measurements get more and more scarce. In previous posts I focused more on the ocean data because it had the biggest influence on the conclusion of the paper.
But indeed the land surface datasets have quite some issues as well: not only scarcity, but also siting issues, convenience sampling and so on. It might be interesting to read an earlier post, Things I took for granted: Global Mean Temperature, in which I reflected on several of these issues and how my beliefs changed over time in this regard.