Monthly Archives: April 2013

Warning: may contain traces of science

The Marcott Hockey stick

Almost two months ago, when this blog was just starting and the first few posts went online, the media breathlessly reported the publication of a new paper by Marcott, Shakun, Clark and Mix. The paper contained quite an impressive graph of the temperatures of roughly the last 11,300 years (the Holocene). It showed temperatures rising until about 10,000 years ago, then a plateau lasting about 5,000 years, then a gradual decline until the last hundred years or so, when temperatures suddenly went completely through the roof.

To be honest, I was not really impressed. I remembered The Hockey Stick well, and the new paper showed something rather similar, but over a longer time frame. It was presented in the media as independent research that came to the same conclusion. There was a difference though: the Marcott paper was well documented. That would make it easier for skeptical souls to analyze the whole thing.

Some of the claims that were made in this press release (my bold):

The analysis reveals that the planet today is warmer than it’s been during 70 to 80 percent of the last 11,300 years.

What that history shows, the researchers say, is that during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit–until the last 100 years, when it warmed about 1.3 degrees F.

“What is most troubling,” Clark says, “is that this warming will be significantly greater than at any time during the past 11,300 years“.

Wow, these are heavy claims. The 11,300-years statement was repeated no fewer than six times throughout the press release. They surely wanted to rub it in! This was the crux of the press release: leave it out and you would have an unremarkable, rather bland press release, nothing newsworthy.

When the press release came out I didn’t care about it too much. I had the impression that this was a statistical construct, just as the original hockey stick was. Other people with much more knowledge of statistics would probably pick the whole thing apart; nothing for me to worry about. For those who want more background on how this hockey stick was crafted statistically, this series at ClimateAudit has Stephen McIntyre’s analysis in its finest detail. I will not go into much detail here; I will only mention a few things that struck me in passing.

It was as expected: in the underlying data there was no uptick. Shaun Marcott had also used the same data in his thesis, and the corresponding graph showed no uptick either. What had changed in the meantime, so that the same data now produced an uptick?

McIntyre found that the proxy data had been re-dated, et voilà: after the re-dating there suddenly was an uptick. So: no re-dating → no uptick; re-dating → uptick. The uptick seemed to depend on the re-dating of the proxy data.

When the paper came under more and more fire, a FAQ was published on the RealClimate website. It is a very interesting read. This is what it said about the uptick (my bold):

Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th-century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.

and

Any small “upticks” or “downticks” in temperature that last less than several hundred years in our compilation of paleoclimate data are probably not robust, as stated in the paper.

and later in the answers on questions of the readers:

The most recent points are affected strongly by proxy dropout and so their exact behavior is not robust.

and:

They specifically state that this reconstruction is not going to be useful for the recent period – there are many more sources of data for that which are not used here – not least the instrumental record

That’s puzzling. In the press release we were repeatedly told that the temperatures of the last 100 years were significantly higher than over the last 11,300 years, and now we get to hear that:

  • If there was a warming trend in another part of the Holocene similar to that of the last hundred years, this method wouldn’t even be able to detect it.
  • The last 100 years of the graph are not robust and one cannot derive anything conclusive from them.
  • The conclusion in their paper is stated differently from that of the press release. In the paper they state that the last hundred years are not robust and therefore are not included in any of their conclusions. But they “forgot” to mention this fact in the press release and trumpeted the relevance of those last hundred years with certainty. This is very misleading. It tricks readers of the press release (which will be read by far more people than the paper) into believing the paper’s conclusion is that the last hundred years are definitely warmer, when this wasn’t the conclusion at all.
  • It was stated that although the reconstruction was not useful for the recent period, other sources like the instrumental record are. But that is not the issue here. The reconstruction in the paper has an uptick, and the press release refers several times to that same uptick. It was emphasized in the media as independent confirmation of other upticks. Yet it is not robust and not useful for the very period it is claimed to be significant for. If that is true, then this paper is no independent confirmation of 20th-century warming and it should not be presented as such. If they had compared with other sources like the instrumental record instead of the Marcott uptick, fine, but then they should have said so in the press release. They did not, and even emphasized the “finding” of the paper as if it were significant.

To continue with this last point: the final date of the reconstruction is 1940. That doesn’t make much sense. If this is really the case, they found an uptick starting 100 years before 1940. But anthropogenic CO2 emissions only gained traction in the 1950s, so if the last proxies were (re)dated to 1940 and they really found an increase, they couldn’t possibly attribute it to anthropogenic CO2. This would mean they found an unprecedented, huge natural temperature increase from the end of the Little Ice Age until the 1940s! That is exactly the opposite of what they suggested in the press release. Which blows their statement (about CO2)…

It’s the only variable that can best explain the rapid increase in global temperatures.

…straight out of the water.
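Returning to the FAQ’s smoothing point: the claim that any uptick lasting less than several hundred years would not survive their averaging can be checked with a toy calculation. All numbers here are my own illustrative choices (a 20-year resolution, a ~300-year moving average, a 1-degree spike in the final 100 years), not the paper’s actual method:

```python
import numpy as np

# Hypothetical toy series: 11,300 years at 20-year resolution,
# flat except for a 1-degree "uptick" in the final 100 years.
years = np.arange(0, 11300, 20)
temp = np.zeros(len(years))
temp[-5:] = 1.0                     # 5 samples x 20 years = 100 years

# A ~300-year moving average (15 samples), similar in spirit to the
# statement that features shorter than several hundred years are
# flattened by the averaging procedure.
window = 15
smoothed = np.convolve(temp, np.ones(window) / window, mode="same")

print(f"spike height before smoothing: {temp.max():.2f}")
print(f"spike height after smoothing:  {smoothed.max():.2f}")
```

In this toy setup a 100-year, 1-degree spike survives the smoothing at barely a third of its height, which illustrates why a comparable excursion elsewhere in the Holocene record would be hard to detect with such a reconstruction.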

Apparently they aimed for maximum shock effect. As a layman, after looking into this, I have many questions. How trustworthy are the authors of the press release? Why the huge disconnect between the conclusion of the paper and the statements in the press release? What were they really trying to communicate here? Obviously not the science.


The making of The Hockey Stick

The Hockey Stick (MBH98)

My story from believer to skeptic – part 3
You might want to read Part 1 and Part 2 first if you haven’t already.

In Part 2 I explained how I started to realize that global warming alarmism was a gross exaggeration, not always backed by the observations. But the alarmist side still had one convincing argument. The last hurdle to take was the temperature chart of the last 1,000 years. I came to know the name of this chart: The Hockey Stick. It was made by Michael Mann, Raymond Bradley and Malcolm Hughes in 1998 (MBH98) and extended in 1999 (MBH99). It represented temperatures of the Northern Hemisphere over the last 1,000 years. It showed a slow rise in temperature (the shaft) until the last, say, 100 years, when it went completely through the roof (the blade).

This chart was found everywhere I looked for proof of the human fingerprint on the warming due to CO2. I was still really naive at that time and believed the graph was correct because it popped up just about everywhere. How could it be incorrect after so many people had eyeballed it? I compared it to open-source software, in which bugs get removed through maximum exposure: the more people who look at the code, the quicker bugs are found and eliminated. So, I thought, the more the Hockey Stick figure was reviewed, the quicker any errors in it would be found.

It was confusing. If this graph really was true, then my new understanding of climate didn’t even matter; then there was a very good reason to sound the alarm. It was clear for all to see: in the very time frame in which man was developing rapidly, temperatures went through the roof. It couldn’t be clearer.

By then it was January 2009, about three months after I started my quest. At that time I had become a regular visitor of a site called ClimateAudit, run by the Canadian mathematician Stephen McIntyre. I didn’t realize it yet, but he was an important factor in the investigation of the hockey stick graph. Together with Ross McKitrick (a Canadian environmental economist at the University of Guelph) he had investigated the graph, and they found that things didn’t add up.

The Wegman report

One day, when I visited the blog, I checked the links bar on the left instead of starting directly on the articles. There I stumbled on a link to the Wegman Report (left link bar, under “Links”). From previous searches I vaguely remembered that it had something to do with the Hockey Stick. I downloaded the report and started to read. I was baffled. The more I read, the more I realized the Hockey Stick was probably based on faulty assumptions, bad statistics and a too closely connected group of scientists. This was the final drop in an almost-full bucket.

Back to the report. Edward Wegman is a statistician (Center For Computational Statistics, George Mason University) and he wrote this report for the U.S. Congress in 2006. Some of the many passages that shook me (my bold):

The controversy of Mann’s methods lies in that the proxies are centered on the mean of the period 1902-1995, rather than on the whole time period. This mean is, thus, actually decentered low, which will cause it to exhibit a larger variance giving it preference for being selected, as the first principal component. The net effect of this decentering using the proxy data in MBH98 and MBH99 is to produce a “hockey stick” shape. Centering the mean is a critical factor in using the principal component methodology properly. It is not clear that Mann and associates realized the error in their methodology at the time of publication. Because of the lack of full documentation of their data and computer code, we have not been able to reproduce their research. We did, however, successfully recapture similar results to those of MM [McIntyre & McKitrick]. This recreation supports the critique of the MBH98 methods, as the offset of the mean value creates an artificially large deviation from the desired mean value of zero.

After MBH99, Stephen McIntyre and Ross McKitrick [MM03] published their critique of the 1998 paper, citing calculation errors, unjustified truncation or extrapolation of source data, obsolete data, geographical location errors and incorrect calculation of principal components. They also claimed that using the MBH98 methodology and the Northern Hemisphere average temperature index for the period 1400-1980 shows that temperatures in the 15th century exceeded those of the late 20th century. In particular, they claim that MBH98’s incorrect usage of PCA alone resulted in the well-known “hockey stick” shape.

Wow, this was heavy! The hockey stick didn’t seem to withstand scrutiny well. As far as I could understand it, the uptick at the end was not derived from the data itself, but resulted from an incorrect use of the methodology, combined with the use of a proxy that doesn’t seem to be a good temperature proxy in the first place. Stephen McIntyre and Ross McKitrick went even further and tested the methodology itself. They fed it red noise (a kind of random data) combined with the Bristlecone data, and found that this created hockey sticks in 99% of the cases. Oops.
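The decentering effect the Wegman Report describes can be sketched with pure red noise. This is only a minimal illustration of the principle, not McIntyre and McKitrick’s actual procedure: the network size, AR(1) coefficient and the “blade ratio” metric below are my own hypothetical choices, and no Bristlecone data is involved:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: ~580 "years" of 50 proxies, with the final
# 79 samples playing the role of a 1902-1980-style calibration window.
N_YEARS, N_PROXIES, CAL = 580, 50, 79

def red_noise():
    """An AR(1) red-noise pseudo-proxy network (rows = years)."""
    x = np.zeros((N_YEARS, N_PROXIES))
    eps = rng.standard_normal((N_YEARS, N_PROXIES))
    for t in range(1, N_YEARS):
        x[t] = 0.9 * x[t - 1] + eps[t]
    return x

def first_pc(X, short_center):
    """First principal component, centered either on the full period
    or only on the final CAL years (MBH-style 'short centering')."""
    mean = X[-CAL:].mean(axis=0) if short_center else X.mean(axis=0)
    u, s, _ = np.linalg.svd(X - mean, full_matrices=False)
    return u[:, 0] * s[0]

def blade_ratio(pc):
    """How far the calibration-era mean departs from the shaft,
    measured in standard deviations of the shaft."""
    return abs(pc[-CAL:].mean() - pc[:-CAL].mean()) / pc[:-CAL].std()

short = np.mean([blade_ratio(first_pc(red_noise(), True)) for _ in range(30)])
full = np.mean([blade_ratio(first_pc(red_noise(), False)) for _ in range(30)])
print(f"mean blade ratio, short-centered: {short:.2f}")
print(f"mean blade ratio, full-centered:  {full:.2f}")
```

The point of the sketch: short centering rewards whichever noise series happen to drift during the calibration window, so the first principal component of pure noise tends to come out with a flat shaft and an end-of-series blade, while full centering shows no such preference.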

More info

The story of the investigation of the hockey stick is broader than the Wegman report, but this report was the trigger for me to look into the issue in more detail. The best summary of this story was written by Marcel Crok in NatuurWetenschappen & Techniek: a very clear and well-researched NWT article in pdf format. Best to read this pdf before continuing, to be able to appreciate the impact it had on me.

There is even more background information (that page contains a link to a pdf file with the background information).
Finally, there is the Hockey Stick Project page with the complete timeline and responses to criticism of their investigation.

The impact

Why did this report (and the McIntyre and McKitrick papers on the Hockey Stick) have such an impact on me?

  • It showed that my assumption that the graph had been properly eyeballed didn’t hold. It took a lot of time and effort from at least one person (and probably three or more) to get hold of the methods and data in order to review it. It took me by surprise that the information said to prove anthropogenic global warming was not organized or documented. This didn’t make much sense to me. How could such important data not be available for all to investigate? Is this how scientists behave when they have crucial data for explaining “the most important problem of humankind”!? The eyeballing I had presumed was in fact only peer review, apparently done within a small group of closely connected people. We should expect more scrutiny of data that affect policy decisions.
  • The graph depended heavily on statistics, but when statisticians reviewed the paper, they found many errors in it that had not been detected by the (much celebrated) peer-review process. This got me thinking about what peer review was really all about. In this case it definitely wasn’t the gold standard it was claimed to be.
  • It was the first time that I noticed the political side behind climate science (the graph was prominent in the 2001 IPCC report).

The final result is that I am not much impressed anymore when someone shows a hockey-stick result as proof of global warming, and there have been many since.

Ending

And hey, we know the temperatures in the 20th century went up. What’s the big deal? Sure, temperatures went up in the last century, but that was not the claim. The claim was that the temperatures of the 1990s were the highest of the last 1,000 years, and this doesn’t follow from the data.

I couldn’t see this in the past, because I didn’t look at the data and just believed what others told me (others who obviously didn’t look at the data either). This is a vicious circle: people read that CAGW is true, don’t check the data and pass the message on. In this way it looks like a universal truth, because the same message is repeated everywhere.

Stephen McIntyre and his blog certainly had a huge influence on me. What influenced me most was the exhortation to check things for oneself, and in the end that is what I began to do. It was one of the reasons why I started this blog and adopted this blog name.

Too much green power

Yesterday was a holiday in Belgium. The weather was nice: sunny, although somewhat cold, and windy. Good weather for a long walk.
On those first spring days, windmills and solar panels were producing a lot of energy while electricity consumption was very low because of the holiday, leading to an enormous overproduction of electricity. The grid operator had to export electricity to France and power down some plants to prevent overloading the grid.

This reminded me of a similar situation before. Almost a year ago (at the end of last May) we encountered exactly the same problem. We also had a sunny and windy extended weekend, which brought our power grid almost to its knees. Back then it was probably worse than now, and a collapse of the grid was only just averted by exporting some excess electricity to France.

But, in the end, is more energy not a good thing? Doesn’t this mean that wind and solar can produce a lot of energy in our country? I think there are more things to consider. Let me explain.

Wind and solar are intermittent energy sources. Windmills only generate electricity when there is enough wind, but not too much. Between these two boundaries: the more wind, the more energy.
The same goes for solar. At night nothing is produced; in the morning, in the evening and on very cloudy days only a little. When there is a lot of sun, a lot of energy is produced.
This seems very simple and straightforward, but it is important to realize it.

On the other hand we have the consumption of electricity. There is a specific consumption pattern throughout the day. In the morning, consumption increases and a small peak forms. There is a second (larger) peak in the early evening. After this evening peak, consumption decreases rapidly, and during the night there isn’t much consumption. During the weekend these peaks are much lower.
Electricity has the downside that it is difficult to store. The power grid operator has to balance production against consumption. Too little production or too much consumption and we risk a brownout (not sufficient power) or a blackout (no power at all). Too much production or too little consumption and electricity is wasted (it is difficult to store) or the grid can be overloaded (with a possible blackout).
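This balancing act can be made concrete with a toy residual-load calculation. All the MW figures below are invented for illustration; they are not actual Belgian grid data:

```python
# Hypothetical demand and combined wind+solar output (MW), in 3-hour blocks
# over one sunny, windy holiday.
demand     = [60, 55, 80, 100, 95, 110, 90, 65]
wind_solar = [10, 15, 70, 120, 115, 60, 15, 5]

for block, (d, ws) in enumerate(zip(demand, wind_solar)):
    residual = d - ws   # what dispatchable plants must still deliver
    if residual >= 0:
        print(f"block {block}: dispatchable plants supply {residual} MW")
    else:
        print(f"block {block}: {-residual} MW surplus to export, store or curtail")
```

In the midday blocks the residual goes negative: that surplus is exactly what the grid operator had to push abroad, while the morning and evening blocks still needed conventional generation.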

The various (dispatchable) energy sources differ in their ability to increase or decrease output on demand. Nuclear and coal, for example, don’t have much flexibility; they are very slow to power up or down. They are mostly used as base load (the minimum level of demand on an electrical grid over a span of time). Gas is more responsive and can be ramped up or down with demand (more or less rapidly, depending on the type).

Wind and solar are a different breed. They are intermittent energy sources: they produce energy that is not necessarily in accordance with the consumption pattern of the moment. The more intermittent sources in the power grid, the more challenging it becomes to balance output against consumption.

So what happened yesterday in Belgium? At first glance it seems a bit counterintuitive. Wind and solar in Belgium account for only a few percent of output; how can that small amount overload our power grid?
There are several reasons, and they all came together yesterday:

  • It was very sunny and windy. Therefore wind and solar, which normally produce well below their optimum, suddenly produced at (near) full output within a very short time frame.
  • Less power is consumed during a holiday than during a working day. In both cases we had an extended weekend (the weekend plus the Monday that followed, which was a holiday).
  • We do have a peak shaver at Coo that can pump up water and release it later to produce electricity when more power is needed, but this system only has a very limited capacity of a few hours.
  • Belgium has older conventional power plants that are not very flexible in powering up or down. They kept producing energy while wind and solar were also performing very well.
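The limited-capacity point about the Coo peak shaver can be sketched as a simple energy buffer. The MW and MWh numbers below are invented to show the principle; Coo’s real ratings are different:

```python
# Hypothetical pumped-storage buffer: it can absorb surplus power, but
# both its pumping rate and its total energy capacity are limited.
PUMP_RATE_MW = 1000   # max power it can absorb in any hour
CAPACITY_MWH = 5000   # total energy it can store ("a few hours")

def absorb(surplus_mw_per_hour):
    """Return energy stored and the hourly overflow that still has
    to be exported or curtailed once the buffer is full."""
    stored, overflow = 0, []
    for surplus in surplus_mw_per_hour:
        take = min(surplus, PUMP_RATE_MW, CAPACITY_MWH - stored)
        stored += take
        overflow.append(surplus - take)
    return stored, overflow

# A sustained 1200 MW surplus fills the buffer within hours;
# after that, every megawatt has to go somewhere else.
stored, overflow = absorb([1200] * 8)
print(f"stored: {stored} MWh, hourly overflow: {overflow}")
```

Even while the buffer is filling it cannot keep up with the full surplus, and once it is full the overflow jumps to the entire surplus, which is why a few hours of storage cannot save a grid from a day-long overproduction.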

The power grid operator had to balance this overproduction. France was willing to absorb our surplus power, and the grid was saved.
Last year it was the same story: too much production in too short a time frame. The Netherlands and Germany were contacted, but they apparently had the same problem. At the last minute France agreed to import some surplus power, once again saving our grid.

And yes, Belgium paid to be able to export that energy (there was no demand for it). But now comes the sad part. Belgium imports a lot of energy from abroad. Shouldn’t we have had more than enough wind and solar to meet demand this time? Well, no. Although there was overproduction during parts of the day, in the morning and evening there wasn’t much wind or sun. Apparently, that day we paid both ways: we paid to import electricity because we didn’t have enough at the beginning and end of the day, and at peak output we paid to export some of it because there was too much production and we needed to protect our grid.

What do you call this… free wind… free solar… or something? At this moment, wind and solar are minor energy sources in our country, and we escaped fairly unharmed this time. But what if we increase our renewables to, say, 20% (the goal imposed by Europe)? If we don’t drastically change our grid by then, I dare not think what could happen.