The paper Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming by Shaun Lovejoy was presented as a statistical analysis that rules out the natural-warming hypothesis with more than 99% certainty. That seems impressive at first, but after reading it, the paper is more hyperbole than substance. It seems riddled with logical fallacies. Previous posts were about cherry picking the time frame and about assuming the conclusion.
This post will be about two datasets that seem comparable, but are not.
Lovejoy looked at two periods: before and after 1850. Before that time he concludes that natural variation was at work; after it, the influence of man. But the big question is: are those two periods comparable? I would have no problem acknowledging that the influence of man in the 16th century was much less than the influence now, but it should be just as easy to understand that the quality of the data will not be in the same league.
This is how he explained the proxy data:
To assess the natural variability before much human interference, the new study uses “multi-proxy climate reconstructions” developed by scientists in recent years to estimate historical temperatures, as well as fluctuation-analysis techniques from nonlinear geophysics. The climate reconstructions take into account a variety of gauges found in nature, such as tree rings, ice cores, and lake sediments. And the fluctuation-analysis techniques make it possible to understand the temperature variations over wide ranges of time scales.
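As an aside, the fluctuation-analysis idea in that quote, looking at typical temperature changes over a wide range of time scales, can be sketched in a few lines. The toy below computes root-mean-square differences of a synthetic random-walk series at increasing lags; it is only my own illustration of the general concept, not Lovejoy's actual Haar-fluctuation method or his data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy temperature-like series: a plain random walk (illustrative
# stand-in only, not a climate reconstruction).
series = np.cumsum(rng.normal(size=4096))

def rms_fluctuation(x, lag):
    """Root-mean-square difference of the series over a given time lag."""
    diffs = x[lag:] - x[:-lag]
    return np.sqrt(np.mean(diffs ** 2))

lags = [1, 4, 16, 64, 256]
flucts = [rms_fluctuation(series, lag) for lag in lags]
for lag, f in zip(lags, flucts):
    print(lag, round(f, 2))
```

For a random walk the typical fluctuation grows with the time scale, which is the kind of scale-dependence such an analysis is meant to expose.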
He seems to assume that proxy data is temperature data! As far as I know, it is not. Tree ring measurements, for example, are the result of a mix of temperature, precipitation, diseases, nutrients, competition, pests, weather events and who knows what more. Sure, temperature is part of the equation, but there are many other parts. Comparing this with thermometer measurements is comparing apples with oranges. It is the same as grafting the instrumental dataset onto proxy data. It looks nice and impressive, but it is meaningless.
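To illustrate the point, here is a toy sketch (my own construction, not anything from the paper) in which a hypothetical proxy responds in equal measure to temperature, precipitation and unrelated noise. Even in this clean setup, the proxy only partially tracks the temperature signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Independent influences on the hypothetical proxy.
temperature = rng.normal(size=n)
precipitation = rng.normal(size=n)
other_factors = rng.normal(size=n)  # pests, nutrients, competition, ...

# Hypothetical mixing: ring width responds equally to all three.
ring_width = temperature + precipitation + other_factors

# How well does the proxy track temperature alone?
corr = np.corrcoef(ring_width, temperature)[0, 1]
print(round(corr, 2))
```

With three equal, independent contributions the correlation with temperature lands well below 1, so treating the proxy as if it were a thermometer reading smuggles in a lot of non-temperature variability.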
How could this be even close to accurate or conclusive? First, as said above, the two datasets can't be compared because they measure different things. The instrumental dataset is direct temperature data (though homogenized and adjusted in order to extract climate information from weather data). The proxy data is temperature data mixed with other influences; it is not pure temperature data. So this is a comparison of two heavily processed datasets, each with its own uncertainties. Secondly, the proxy data is even more sparse than the weather data.
Whatever conclusion is drawn from this comparison, it shouldn't be very convincing.