Tag Archives: Science By Press Release

The “vetting process” of the climate emergency petition

There was quite some fuss in the media about a paper claiming there is a climate emergency, supported by a list of 11,000 scientist signatories. I didn’t have much time back then, so I just downloaded the list of signatories to look at it later.

The petition was hosted at the site of the Alliance of World Scientists and it links to the article World Scientists’ Warning of a Climate Emergency (where the list of signatories is downloadable). There are five authors and, on the Alliance of World Scientists web page, the petition list is placed right behind the authors, suggesting that all those signatories are scientists on a par with the authors:

Climate emergency petition: condensed message on the Alliance of World Scientists website

The number of signatories is no longer shown because there was an issue with, ahem, “invalid signatures”. That is a nice way of saying that some crazy entries were found in the list. In the meantime I also read a CBC news article in which the lead author was asked about the inclusion of a certain “Micky Mouse” among the signatories. This was his answer:


The molehill that was promoted to a mountain


It doesn’t happen often that record sea ice in the Antarctic is covered in the media. Yet that changes when, at the same time, the record can be minimized. A couple of days ago a press release about Antarctic ice cover was issued. Its title: “Has Antarctic sea ice expansion been overestimated?“. The referenced paper, “A spurious jump in the satellite record: has Antarctic sea ice expansion been overestimated?” by Ian Eisenman et al, stated that “much” of the expansion could be due to a processing error:

New research suggests that Antarctic sea ice may not be expanding as fast as previously thought. A team of scientists say much of the increase measured for Southern Hemisphere sea ice could be due to a processing error in the satellite data.

The press release starts by defining the problem: it was the increase in Antarctic sea ice cover in a warming world that puzzled the scientists. In the paper they also mention the inability of the models to capture the observed increase. The investigators now try to explain these contradictions by suggesting that much of the measured expansion may simply be due to an error in the way the satellite data was processed, culminating in a title that questions whether Antarctic sea ice cover really is setting record highs.

When I first heard this, two questions crossed my mind. First, why did they only find this now, after many decades of measurements? Second, how much of the increase is actually due to this error?

A couple of days ago these questions were answered in one fell swoop by Pat Michaels and Paul Knappenberger. The first question was easily answered. According to the authors, the error was found so late because the difference was not readily visible in the noise. This triggered my curiosity to look into the press release (and later the paper). This is how they put it in the press release:

“You’d think it would be easy to see which record has this spurious jump in December 1991, but there’s so much natural variability in the record – so much ‘noise’ from one month to the next – that it’s not readily apparent which record contains the jump. When we subtract one record from the other, though, we remove most of this noise, and the step-like change in December 1991 becomes very clear.”

If the difference was that difficult to spot because of the noise, what does that say about the strength of the data in the first place?

Second, how big is that difference? It can’t be that big, otherwise it would only emphasize the noisiness of the data even more. Some more explanation is needed here. What Eisenman et al found was a step change in the data after processing. The sensors are changed from time to time and need to be calibrated against each other. On one of those occasions this might have gone wrong, and it was found after an update of the processing software (the Bootstrap algorithm). When the data was processed with both versions and one record was subtracted from the other, Eisenman et al found a step change in December 1991, at the moment the sensor was changed.
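
To get a feel for how differencing two versions of the same record can expose such a calibration step, here is a minimal sketch with purely synthetic data (the trend, the noise level and the 0.2-unit step below are invented for illustration; this is not the actual Bootstrap processing):

```python
import numpy as np

rng = np.random.default_rng(0)
months = 360                      # 30 years of monthly anomalies
t = np.arange(months)

# Synthetic anomaly record: a small trend plus large month-to-month noise
signal = 0.002 * t + rng.normal(0.0, 1.0, months)

# Version 1 of the processed record: the signal as-is
v1 = signal

# Version 2: identical, except a calibration step of 0.2 units
# introduced at a (hypothetical) sensor transition in month 155
step_month = 155
v2 = signal + 0.2 * (t >= step_month)

# In either record alone the step is buried in noise that is five
# times larger than the step itself...
print("noise level:", signal.std().round(2))

# ...but subtracting one record from the other removes the shared
# noise, leaving a clean step function
diff = v2 - v1
print("difference before the step:", diff[:step_month].mean())  # 0.0
print("difference after the step: ", diff[step_month:].mean())  # 0.2
```

This mirrors the logic of the quote above: the natural variability shared by both versions cancels in the subtraction, while the step, present in only one version, remains.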

Eisenman et al, figure 2

After 1991 the ice cover increased and has stayed high since then. Hence the conclusion that “much” of the expansion was due to this processing error, not to an actual increase in ice cover. Now, Michaels and Knappenberger estimated the difference at about 200,000 km², which is not really much compared to the increase of about 1.3 million km² in cover. I think it will be even less. The step change is situated within the period from which the baseline mean is calculated (1979 until 2008), so part of the step is absorbed into the mean itself and the net effect on the anomalies will be smaller than the full step. Even if we subtract 200,000 km² from the current value, it is still quite an increase. An increase still not understood by the scientists and not projected by the models. Hence Michaels and Knappenberger’s comparison with a molehill that was made into a mountain.
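
A small sketch of that baseline effect, with illustrative numbers only (the flat cover of 11.0 million km² and the 0.2 million km² step are invented; only the 1979-2008 baseline period comes from the discussion above):

```python
import numpy as np

# Invented record: flat at 11.0 million km², stepping up by 0.2 in 1992
years = np.arange(1979, 2015)
cover = np.where(years < 1992, 11.0, 11.2)

# The anomaly baseline is the 1979-2008 mean, which straddles the step,
# so part of the step is absorbed into the baseline itself
baseline = cover[(years >= 1979) & (years <= 2008)].mean()
anomaly = cover - baseline

print("baseline:          ", round(baseline, 3))    # ~11.113
print("anomaly pre-step:  ", round(anomaly[0], 3))  # ~-0.113
print("anomaly post-step: ", round(anomaly[-1], 3)) # ~+0.087
```

The 0.2 step shows up in the anomalies as only about +0.09 relative to the baseline, because the baseline already contains seventeen post-step years.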

This was where Michaels and Knappenberger stopped. But after reading the press release and the paper, more questions arose than were solved. Looking at the Antarctic ice cover data, I would expect some step change in 1992 due to this processing error, but this was not the case. On the contrary, it seemed as if the ice cover decreased a little in 1992. Moreover, the graph hovered around the zero line until around 2007, and only then did the ice cover start to increase rapidly. Look at the end year in figure 2: the so-called “spurious” jump in the satellite record wasn’t even in the data they presented in the press release.

Southern Hemisphere sea ice anomaly, July 24, 2014

According to Eisenman it is not clear which version is wrong. He sees two options. The first option is that Bootstrap version 1 is correct and version 2 introduced the problem after the update in 2007. This then means that the rate of Antarctic sea ice expansion has been overestimated in recent years, hence the title of the press release. The second option is that version 1 was wrong and the jump in 1991 was corrected with the update of version 2 in 2007.

The maintainer of the dataset doesn’t really agree. He claims there was indeed an error, but it was corrected in 2008 and the current version is correct:

The climate scientist who maintains the data set, Josefino Comiso of the NASA Goddard Space Flight Center in Greenbelt, Maryland, says he is confident that the current data set is correct. Comiso says that he inadvertently introduced a mistake into the record at some point after 1991, but corrected it unknowingly when he updated the file in 2008.

Comiso and other climate scientists reject the suggestion that his data set may overestimate the recent trend in Antarctic sea-ice growth – by as much as two-thirds, according to Eisenman’s analysis. Another NASA sea-ice data set, processed using the other standard algorithm, shows a growth trend similar to that in Comiso’s current data.

That casts a fresh light on the case, to say the least.

What does this mean? We saw a press release that basically says that Antarctic sea ice expansion has been overestimated and that “much” of the increase in Antarctic sea ice cover is due to a processing error. Yet when we look at what was found, “much” doesn’t seem all that much after all. At least not the “significant error” they talked about. There is even the possibility that the error had already been found and corrected in 2008. The molehill seems to have been sold to the public as a mountain. Those who don’t look at the numbers will get the impression that Antarctic ice cover is not worth talking about, while in fact, when the value of the step change is subtracted from the current values, it is still quite an increase.

What difference does it make anyway?


In the previous two posts I looked at the original hockey stick and its most recent incarnation. Both hockey stick shapes seemed to be artifacts of the methods used, not of the underlying data. But, you could say, “Even if this uptick doesn’t follow from the data in those two graphs, we have been measuring surface temperatures for more than a century and the way the temperatures go is up. If the hockey stick graph doesn’t tell the story, the measurement data surely do! So what difference does it make anyway?”.
I have seen this remark popping up in several discussions. At first I was puzzled by such statements, but now I think they fail to take into account what is really at issue. Let’s look into it in more detail.

There are two data sets in play here. The first is the proxy data set, which consists of proxy data like tree rings (Mann’s hockey stick) or ocean sediment cores (Marcott’s hockey stick). The second is the instrumental record, which consists of temperature measurements made with thermometers.

Proxy data is NOT real temperature data. Previously I assumed it was, because I knew that, for example, in a warm year the rings of a tree will be wider than in a colder year. Although this is definitely true, it is also true that there are other influences on tree rings, like moisture, nutrition, diseases, pests, competition with other plants/trees, interactions with wildlife, weather events and who knows how many other factors that matter for the health of that tree. In that sense, the width of a tree ring depends not only on temperature but also on these other influences. This means the temperature signal is diluted in the proxy data and not directly comparable with real temperature data. What could be said is that the conditions for that tree were better or worse over time, not necessarily that temperatures went up or down. The proxy data will contain a temperature signal, but it will be noisy (the temperature signal is probably a big part of it, but not necessarily a constant part).
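
As a rough illustration of that dilution, here is a toy model in which ring width responds to temperature plus everything else (all numbers below are invented; the point is only that the correlation with temperature drops well below 1 once other influences are in play):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical years of tree growth

# Invented annual temperatures driving ring width
temperature = rng.normal(15.0, 1.0, n)

# Non-temperature influences: moisture, pests, competition, ...
other_influences = rng.normal(0.0, 2.0, n)

# Ring width responds to both, so the temperature signal is diluted
ring_width = temperature + other_influences

r = np.corrcoef(ring_width, temperature)[0, 1]
print(f"correlation of proxy with temperature: {r:.2f}")  # ~0.45
```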

Thermometers, on the other hand, carry a very good temperature signal. When the temperature goes up, the substance they contain (alcohol, mercury, metal) expands. When they cool, that substance contracts. The higher the temperature, the bigger the expansion; the lower the temperature, the bigger the contraction.
After the measurements it becomes more complicated, with issues like the UHI (Urban Heat Island) effect on the measurements and the further processing of the data (are they really representative of global or Northern Hemisphere temperatures?), but that is a different story altogether.

Another issue in this comparison is the resolution. For example, the Marcott hockey stick has a resolution of more than 300 years. The instrumental record has a much higher resolution, down to a single day. Even if we bring that to a year, or even 10 or 20 years, it is a much higher resolution than the proxy data set. If the instrumental record were somehow appended to the proxy data and treated the same way as the proxy data, it would be barely one measly point, and probably not even placed high in the graph either.
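
A quick sketch of what that means in practice (the instrumental series below is invented; the ~300-year bin width is the only figure taken from the discussion above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instrumental record: 160 annual values, 1850-2009,
# with a modest warming trend plus year-to-year noise
years = np.arange(1850, 2010)
annual = 0.005 * (years - 1850) + rng.normal(0.0, 0.2, years.size)

# At a ~300-year proxy resolution this entire record fits inside a
# single bin, so it collapses into (at most) one averaged point
bin_value = annual.mean()
print(f"{years.size} annual measurements -> one point: {bin_value:.2f}")
```

The averaging also pulls that one point down: the bin value sits near the middle of the 160-year trend, well below the most recent annual values.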

As far as I know, there is no dispute that the world has been warming over the last 160 years. Temperatures have been measured for some time now and, although we are currently in a flat-lined stretch, the general trend since 1850 has been upwards. But that is not what these two hockey stick graphs were trying to say. The issue they want to prove is that the last century is unusually warm compared to previous eras. According to their statements this hasn’t happened in, let’s say, the last 1,000 years (Mann’s hockey stick) or 11,300 years (the Marcott hockey stick), and therefore it has the human fingerprint all over it (because of humans emitting more and more CO2 into the atmosphere).

Let’s keep the focus on what is really being said here. At issue in the hockey sticks is the uniqueness of the warming, not the fact that it warmed. We already know it warmed, but we don’t know whether it happened before, and the data given by these two studies are not sufficient to base that conclusion on. Even if it had happened in the past, these methods would not be able to show it. When this uniqueness within the long time frame doesn’t follow from the data, it makes no sense to try to prove it with the incredibly short data set we have in the instrumental record.

Another issue came to light with the Marcott paper: claiming that the last 100 years are unprecedented (in the press release) and later saying the non-robustness of the last 100 years doesn’t matter because the instrumental record could well prove it (in the FAQ) is not really honest. The claim was made exactly about that non-robust data, when in reality the data of the graph was not saying much about the last 100 years; it even seemed to follow that this data is useless for the current period. If the available evidence doesn’t support a claim, then one shouldn’t make that claim.

Returning to the initial question: what difference does it make that the last part of the hockey stick graphs is incorrect, given that we know the earth has warmed over the last 160 years anyway? As seen above, that is a false premise, because that was not what the hockey sticks were trying to prove in the first place. But there is more to it than that, and it was the statements about the Marcott paper that led me to notice this. The initial question diverts attention from the strong statements that were made in the press. Just let me turn the question around: if the proxy data has to be tortured in order to get it into a hockey stick shape, how much signal of our current temperatures is there really in the proxy data set? To put it another way: how much of an “independent” confirmation of our current temperatures is this really?

In the end, does it matter? For those who have read the papers, probably not. If they saw the articles in the press, they could put this into context. But it does matter for the laymen who only got to see the articles in the press and were yet again confirmed in their beliefs, without realizing that the papers themselves didn’t warrant those conclusions at all.

Warning: may contain traces of science

The Marcott Hockey stick

Almost two months ago, when this blog was just starting and the first few posts were put online, the media breathlessly reported the publication of a new paper by Marcott, Shakun, Clark and Mix. In the paper there was a quite impressive graph showing the temperatures of roughly the last 11,300 years (the Holocene): an increase in temperatures until 10,000 years ago, then a plateau that lasted about 5,000 years, then a gradual decrease in temperatures until the last hundred years or so, when temperatures suddenly went completely through the roof.

To be honest, I was not really impressed. I remembered The Hockey Stick well, and the current paper showed something rather similar, but over a larger time frame. It was presented in the media as independent research that came to the same conclusion. There was a difference though: the Marcott paper was well documented. This would make it easier for skeptical souls to analyze the whole thing.

Some of the claims that were made in this press release (my bold):

The analysis reveals that the planet today is warmer than it’s been during 70 to 80 percent of the last 11,300 years.

What that history shows, the researchers say, is that during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit–until the last 100 years, when it warmed about 1.3 degrees F.

“What is most troubling,” Clark says, “is that this warming will be significantly greater than at any time during the past 11,300 years“.

Wow, these are heavy claims. The 11,300-years statement was repeated no less than six times throughout the press release. They surely wanted to rub it in! This was the crux of the press release: leave it out and it would be an unremarkable, rather bland press release, nothing newsworthy.

When the press release came out I didn’t care about it too much. I had the impression that this was a statistical construct, just as the original hockey stick was. Other people with much more knowledge of statistics would probably dissect the whole thing. Nothing for me to worry about. For those who want more background on how this hockey stick was crafted statistically, this series at ClimateAudit has the analysis of Stephen McIntyre in all its finest details. I will not go into too much detail; I will only mention a few things that struck me in passing.

It was as expected: in the underlying data there was no uptick. Also, Shaun Marcott used the same data in his thesis and the corresponding graph showed no uptick either. What had changed in the meantime, so that the same data would now produce an uptick?

McIntyre found that the proxy data were re-dated, et voilà: after the re-dating there suddenly was an uptick. So: no re-dating → no uptick. Re-dating → uptick. The uptick seemed to depend on the re-dating of the proxy data.
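
One plausible mechanism by which re-dating can produce such an uptick is proxy dropout at the end of the record: shifting a cold-reading series a few decades earlier pushes it out of the final bin, so that bin’s mean is computed only from the remaining, warmer series. A toy illustration with hypothetical cores and values (this is not Marcott’s actual data or method, just the arithmetic of dropout):

```python
# Each hypothetical core is (year of its most recent sample, value);
# the stack's final bin averages all cores that still reach it
FINAL_BIN_START = 1900

proxies = {
    "warm_core": (1950, 0.4),    # reaches the final bin
    "cool_core": (1920, -0.3),   # reaches the final bin
    "old_core":  (1880, 0.0),    # too old, drops out anyway
}

def final_bin_mean(proxies: dict, cutoff: int) -> float:
    """Mean of all cores whose latest sample falls in the final bin."""
    values = [v for (year, v) in proxies.values() if year >= cutoff]
    return sum(values) / len(values)

print("before re-dating:", final_bin_mean(proxies, FINAL_BIN_START))  # 0.05

# Re-date the cool core a few decades earlier: it no longer reaches
# the final bin, and the end point jumps up to the warm core's value
proxies["cool_core"] = (1890, -0.3)
print("after re-dating: ", final_bin_mean(proxies, FINAL_BIN_START))  # 0.4
```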

When the paper came under more and more fire, a FAQ was published on the RealClimate website. It is a very interesting read. This is what it said about the uptick (my bold):

Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th-century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.

and

Any small “upticks” or “downticks” in temperature that last less than several hundred years in our compilation of paleoclimate data are probably not robust, as stated in the paper.

and later in the answers on questions of the readers:

The most recent points are affected strongly by proxy dropout and so their exact behavior is not robust.

and:

They specifically state that this reconstruction is not going to be useful for the recent period – there are many more sources of data for that which are not used here – not least the instrumental record

That’s puzzling. In the press release we were repeatedly told that the temperatures of the last 100 years were significantly higher than over the last 11,300 years, and now we get to hear that:

  • If there was a warming trend in another part of the Holocene similar to that of the last hundred years, this method wouldn’t even be able to detect it (see the sketch after this list).
  • The last 100 years of the graph are not robust, so nothing conclusive can be derived from them.
  • The conclusion in the paper is stated differently from the press release. In the paper they state that the last hundred years are not robust and therefore are not the basis of any of their conclusions. But they “forgot” to mention this fact in the press release and shouted out the relevance of this last hundred years with certainty. This is very misleading. It tricks readers of the press release (which will be read by many more people than the paper) into believing the paper’s conclusion is that the last hundred years are definitely warmer, when this wasn’t the conclusion at all.
  • It was stated that although the reconstruction is not useful for the recent period, other sources like the instrumental record are. But that is not the issue here. The reconstruction in the paper has an uptick, and the press release refers several times to that same uptick. It was emphasized in the media as independent confirmation of other upticks. Yet it is not robust and not useful for the very period for which it was claimed to be significant. If this is true, then this paper is no independent confirmation of the 20th-century warming and should not have been presented as such. If they had compared with other sources like the instrumental record instead of the Marcott uptick, fine, but then they should have said so in the press release. They did not, and even emphasized the “finding” of the paper as if it were significant.
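
To see how a smoothing comparable to the stack’s resolution hides a century-scale event (the first point in the list above), here is a minimal sketch with entirely invented numbers: a flat, noisy series with one artificial 100-year warm spike, smoothed with a 300-year moving average standing in for the stated resolution:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(-9300, 1951)            # ~11,250 annual steps

# Invented temperature history: flat noise plus one 100-year warm spike
temps = rng.normal(0.0, 0.05, years.size)
spike = (years >= -5000) & (years < -4900)
temps[spike] += 1.0

# Smooth with a ~300-year moving average, a stand-in for the inherent
# smoothing of the averaging procedure
window = 300
smoothed = np.convolve(temps, np.ones(window) / window, mode="same")

print("spike height, raw:     ", round(float(temps[spike].mean()), 2))  # ~1.0
print("spike height, smoothed:", round(float(smoothed.max()), 2))       # ~0.33
```

A one-degree spike lasting 100 years survives the 300-year averaging at only about a third of its height, which is why a reconstruction smoothed like this cannot rule out past warming episodes of the same duration as the modern one.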

To continue with this last point: the final date of the reconstruction is 1940. That doesn’t make much sense. If this is really the case, they found an uptick starting about 100 years before 1940. But CO2 emissions only gained traction in the 1950s, so if the last proxies were (re)dated to 1940 and they really found an increase, they couldn’t possibly attribute it to anthropogenic CO2. This would mean they found an unprecedented, huge natural temperature increase from the end of the Little Ice Age until the 1940s! That is exactly the opposite of what they suggested in the press release. Which blows their statement (about CO2)…

It’s the only variable that can best explain the rapid increase in global temperatures.

…straight out of the water.

Apparently they aimed for maximum shock effect. As a layman, after looking at this, I have many questions. How trustworthy are the author(s) of the press release? Why the huge disconnect between the conclusion of the paper and the statements of the press release? What were they really trying to communicate here? Obviously not the science.