Monthly Archives: May 2013

Trust me, I am a doctor…

The consensus argument is about maintaining the perception of authority in climate science. Unsurprisingly, the comparison is often made between a climate scientist and a medical doctor as a symbol of this authority. It goes like this: “Do you go to a dentist when you have a heart condition?”. The point being that we should agree when scientists say that “climate change is real” and “human caused”.

For the record, I have no interest in going to a dentist with a heart condition or to a nutritionist for surgery. They obviously have their own specializations and skills. Analogies can be a great tool for explaining something new or difficult to grasp, but they only hold as far as the similarities go. It seemed a fun project to find out how far I could stretch this one.

There are of course similarities. The human body is a complex organism, just as the climate is a complex system. Both contain different processes that interact with each other, and both are objects of scientific study.

But there are also important differences. Medical science is many centuries old (there were medical schools around the first millennium). It wasn’t like the medical science of today, of course; back then they healed with herbs, ointments and bloodletting. Medical science has come a long way since then. The human body is studied in great detail these days; even the human genome has been sequenced completely. Countless diseases have been studied, and for most of them a cure has been found or the disease can be controlled. This experience in diagnosing and curing diseases has been built up over many centuries and many, many patients.

The same cannot be said of climate science. A lot is known about the climate, but climate science is a relatively young field. Reliable data only goes back 30 to 50 years, or in some cases only a decade. Before that, the data is really sparse and/or not in a standardized format: it was gathered, for example, as weather data for a specific place, not as climate data globally. Climate is weather over a longer time frame, and one cycle probably takes about 60 years. So there isn’t even reliable data for one measly cycle. And how much experience do we have in diagnosing the effects of current CO2 levels?

There are a lot of disciplines involved in climate science, each studying a different subset of the climate. Which expert do you believe when it comes to climate? A paleontologist, a mathematician, a chemist, an ecologist, an economist…? And what if a paleontologist is talking about meteorology, or an ecologist about the economy?

In medical science, patients can be studied by comparing them with healthy individuals. Not so in climate science. We don’t have a spare planet, identical to ours, on which to investigate what happens when we pump CO2 into the atmosphere or extract it. We have only one patient, and we don’t really know whether he is sick, and if so, how sick and how to cure him. The data used for the diagnosis is either highly processed and adjusted (look at how the GISS data set morphed over time until it became unrecognizable) or comes from computer models (which try to model an intrinsically chaotic system).

Last but not least, I will only go to a cardiologist I trust. I will not go to someone known for making wrong diagnoses time after time, or for exaggerating their diagnoses for whatever reason.

To me it is clear that the doctor analogy doesn’t hold much water. The two fields obviously have similarities, but in the end they are too different to really compare.

Let me just say that I will trust some scientists, not all, just as I will trust some doctors, not all: depending on their specific skills, their past track record, their willingness to share their data, how well their predictions held up… Trust is not a given; it has to be earned. That goes for doctors and climate scientists alike.

Why the need to declare a consensus?

While learning about the methodology of the Cook survey, one question kept popping up: why the need to declare a consensus in a scientific paper?

It doesn’t seem to make much sense scientifically. Science is not advanced by consensus; there is no show of hands or headcount. Logically, it doesn’t matter how many scientists hold the same opinion: even if 100% of scientists agree on something, that doesn’t make it any more true. This made me think of Einstein’s reaction when he heard that a hundred authors had contributed to a book declaring his findings faulty. He wasn’t really impressed by the sheer number: “If I were wrong, one would have been enough.” Consensus is not proof that something is right.

If the science is so clear and the evidence so overwhelming, why not just show that unequivocal evidence to the whole world? Why not debate the other side? If they have no valid arguments, that would become clear rather quickly. Why even bother making a survey to prove a particular case? Just let the arguments prevail. If you have proof, why would you even need a consensus?

The more I think about it, the more I am surprised there aren’t more scientists disagreeing with each other. The climate is a very complex research topic, and climate science is a relatively young field that only recently got reliable data. A few random thoughts on why, in my opinion, it would be hard to come to a consensus in climate science:

  • Many different factors contribute to the outcome.
    Probably not all of them are known. There is still much debate about some of them.
  • Those elements are also interconnected.
    Change one and others will also change. CO2 can raise the temperature. Higher temperatures mean more evaporation (which takes energy), but also more water vapor in the air (itself a greenhouse gas), which in turn means more clouds (which can decrease incoming solar radiation and lower the temperature), and so on. Change one element and the others change too, until a new equilibrium is reached.
  • We only have reliable data since, let’s say, the late 1970s (air temperatures, sea ice, thunderstorms) or even only since the beginning of the millennium (sea surface temperatures).
    That is not even one complete cycle! Even now we depend on data that was never intended for monitoring climate and that has to be interpreted to get a result. It never ceases to amaze me that climate experts prefer the sparse and highly adjusted land surface data set over satellite measurements.
  • Most of the doom and gloom scenarios come from mathematical models.
    Although I am quite computer-minded and have some programming skills, I fail to see how one could get accurate results from such limited data. In computer science class we learned about the GIGO principle (Garbage In → Garbage Out). Climate experts keep telling us that CO2 is the main driver behind the warming since 1850. I can vividly imagine models being trained with a high CO2 sensitivity and then, unsurprisingly, showing a high temperature increase. Train a model on the assumption that CO2 is the main driver and it will show a huge response to more CO2. That shouldn’t surprise anyone (see the sketch after this list).
  • The system is ever changing, with or without us.
    Change is the norm. When I was a believer, I was afraid because I assumed change meant losing things. While that is perfectly possible, change is not necessarily a bad thing, although it can be: it cuts both ways. A warming climate can mean we lose something, but it can be beneficial as well. There are many possible effects on both sides of the issue. Focusing only on the (possible) bad things is a very unsatisfying and depressing way to live. Been there, done that.
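To illustrate that last point, a toy sketch in Python. This is in no way a real climate model; the “sensitivities” are assumptions picked purely for illustration, and the point is only that the projected warming follows directly from the sensitivity you feed in:

```python
import math

# Toy illustration: when the CO2 sensitivity is an input assumption,
# the projected warming follows directly from that assumption.
def project_warming(co2_start_ppm, co2_end_ppm, sensitivity_per_doubling):
    """Warming implied by an assumed sensitivity (degrees C per CO2 doubling)."""
    doublings = math.log2(co2_end_ppm / co2_start_ppm)
    return sensitivity_per_doubling * doublings

for s in (1.5, 3.0, 4.5):  # assumed sensitivities, degrees C per doubling
    print(f"assume {s} C per doubling -> project {project_warming(280, 560, s):.1f} C")
# assume 1.5 C per doubling -> project 1.5 C
# assume 3.0 C per doubling -> project 3.0 C
# assume 4.5 C per doubling -> project 4.5 C
```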

In such a complex system, wouldn’t it be more logical for more scientists to disagree with each other? Claiming a consensus about a highly complex system with sparse data seems rather artificial.

Scientifically it doesn’t make much sense, but there definitely are advantages in declaring a consensus. It has important psychological, social and political impacts.

If there is consensus, there is no real need to debate anyone on the science. As Katharine Hayhoe put it rather clearly on Twitter:

I don’t debate unless there’s equal representation (49 pro-climate chg scientists vs 1 against)

Under these conditions the chance of debating anyone with a different view will be minute to nonexistent. That is a very comfortable position to be in. But even if this is an extreme example, in the real world there is not much of a debate going on. There are eminent scientists on both sides of the debate. It would be refreshing to hear the arguments from the two sides and to judge them on their own merits, regardless of whether they come from the “consensus” or the “non-consensus” side of the issue.

Another advantage of declaring a consensus: there is safety in numbers. For a layman, consensus is reassuring. It relieves us from digging into the matter and trying to understand it in order to form our own opinion. It also makes us more susceptible to (severe) measures decided by politicians. Here we come back to the conclusion of the Cook survey:

The public perception of a scientific consensus on AGW is a necessary element in public support for climate policy.

Translate this to the real world: unless the public has the perception that there is a consensus among scientists that anthropogenic causes are the main driver of global warming, governments are not likely to act. Was the purpose of the paper to declare a consensus in order to facilitate climate policies?

The consensus gap

[Image: Schopenhauer quote]

Do you remember the game from your youth in which someone whispers a word to the next person, who passes it on down the line, and by the end the word has morphed into something different? The same seems to be happening with the Cook survey. Not that it was unexpected.

Look how the results of the survey morphed into something completely different.

First, the following tweet from President Obama:

Ninety-seven percent of scientists agree:
#climate change is real, man-made
and dangerous. Read more: http://OFA.BO/gJsdFp

The link in this tweet leads back to Reuters, which states:

Ninety-seven percent of scientists say global warming is mainly man-made but a wide public belief that experts are divided is making it harder to gain support for policies to curb climate change, an international study showed on Thursday.

The two statements are a strange twist of the conclusion of the survey.

First the claim that “climate change is real, man-made and dangerous”. The conclusion suddenly went from “97% of the abstracts in which a position was found in the title or abstract were assumed to endorse anthropogenic global warming” to “Climate change is real, man-made and dangerous“. With all due respect, there is no way this conclusion can follow from this survey! Nevertheless, this is how the results are being reported to the outside world.

Then let’s look at the Reuters statement: “Ninety-seven percent of the scientists say”. This is not what 97% of scientists “say” at all. Remember, this is what the survey actually did:

  • A group of volunteers from the blog Skeptical Science compiled about 12,000 papers published between 1991 and 2011 containing the words “global warming” or “global climate change”
  • They limited the list to papers with an abstract of less than 1,000 characters
  • From this selection they viewed the title and the abstract and rated them into 7 categories
  • They could only find a position on anthropogenic global warming in 33.6% of the abstracts
  • Of those 33.6%, 97.1% endorsed it and only a tiny fraction rejected it (1.9%) or was uncertain (1.0%) (see the quick check below)
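A quick sanity check on what that means over the whole sample (a minimal sketch using only the survey’s own published percentages):

```python
# Recomputing the headline figure from the survey's own numbers.
with_position = 0.336   # fraction of all abstracts in which a position was rated
endorsing = 0.971       # fraction of that subset rated as endorsing AGW

print(f"{with_position * endorsing:.1%} of ALL abstracts were rated as endorsing")
# -> 32.6% of ALL abstracts were rated as endorsing
```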

Therefore they claim an “overwhelming” consensus…

They tried to pinpoint how much agreement they could find via the title and the abstract of a paper. This leaves me with the question: how many of the papers assumed to endorse anthropogenic global warming really investigated the issue? And additionally: how many of them found it “dangerous”?

Back to the paper. Let’s just look at the conclusion. This is how it starts (my bold):

The public perception of a scientific consensus on AGW is a necessary element in public support for climate policy (Ding et al 2011). However, there is a significant gap between public perception and reality, with 57% of the US public either disagreeing or unaware that scientists overwhelmingly agree that the earth is warming due to human activity (Pew 2012).

Followed by a full paragraph about strategies in media treatment.

That sounds odd in a publication titled “Quantifying the consensus on anthropogenic global warming in the scientific literature”. There is nothing about public perception in the abstract either, and this “gap” was what the Reuters article was all about. The fact that it is in the conclusion, and even makes up the largest part of it (two paragraphs out of three), is a dead giveaway that this is climate policy in action. Maybe, just maybe, this is exactly what they wanted to communicate after all?

What’s in a number?

For those who haven’t heard the news yet: CO2 levels at Mauna Loa recently started to reach 400 ppm. It was hailed as an important number; some even talked about a tipping point. So I thought it would be an interesting project to look into this in a bit more detail and find out how much is too much where CO2 is concerned.

First things first, the 400 ppm level. I read about it in a newspaper last Tuesday and could trace the origin back to a media alert by Christiana Figueres, Executive Secretary of the UNFCCC.

“With 400 ppm CO2 in the atmosphere, we have crossed an historic threshold and entered a new danger zone. The world must wake up and take note of what this means for human security, human welfare and economic development. In the face of clear and present danger, we need a policy response which truly rises to the challenge. We still have a chance to stave off the worst effects of climate change, but this will require a greatly stepped-up response across all three central pillars of action: action by the international community, by government at all levels, and by business and finance.”

“Historic threshold”, “new danger zone”, “in the face of clear and present danger”! No hyperbole was spared in this one. But hey, lucky us, we still have a chance to dodge this bullet!

The big question we started with: how much is too much? I remembered that this was not the first time the media reported major tipping points; I recalled values of 350 ppm, 650 ppm and the pre-industrial level. Picture my surprise when I hit Google with some search terms and found loads of links to statements involving numerous CO2 tipping points. Below you will find one or two examples of each level; there are loads more on the internet.

Let’s start with a well-known level. There is of course the 350.org organization, which states that 350 ppm is the upper limit and that we should return below it to be on the safe side.

350 means climate safety. To preserve our planet, scientists tell us we must reduce the amount of CO2 in the atmosphere from its current level of 392 parts per million to below 350 ppm. But 350 is more than a number - it’s a symbol of where we need to head as a planet.

We passed that level about two decades ago already; it seems less dangerous than originally thought. But even lower tipping points have been proposed. Some scientists, for example James Hansen, state that 300 ppm is the real safe limit. Read more about this on the 300.org Google site, which includes some contemplation about whether 300 ppm or 350 ppm is the safe limit:

2. Urgent reduction of atmospheric CO2 to a safe level of about 300 ppm as recommended by leading climate and biological scientists.

These are probably levels from almost 70 years ago! Even that seems too high. What about 280 ppm? The earth was, ahem, “in balance” with that, so it would be straightforward to aim for that level. According to State of Nature:

If burning fossil fuels like coal and oil during industrialization has created the mess we’re in with climate change, it seems only logical that we should aim for pre-industrial levels of atmospheric CO2 of 280 ppm.

or the book Climate Challenge: 101 Solutions to Global Warming by Guy Dauncey (p. 31):

We need a target that guarantees the safety of our planet – in other words 280 ppm.

We last had that level more than 150 years ago. Apparently mayhem takes its time. And even that is much too high: Professor Schellnhuber says that we need to go below 280 ppm!

Schellnhuber states that we need to go to pre-industrial levels of CO2 emissions (less than 280ppm) to save the planet

How much lower?

global emissions to be reduced to 220-225ppm.

Oops, earth must have experienced loads of dangerous tipping points during the last 800,000 years! Who knew?

Not much chance of tipping points higher than the currently stated 400 ppm, you think?

You would be wrong.

GISS states that levels above 450 ppm would be dangerous:

According to study co-author Makiko Sato of Columbia’s Earth Institute, “the temperature limit implies that CO2 exceeding 450 ppm is almost surely dangerous, and the ceiling may be even lower.”

Stepping it up a notch: 500 ppm.
From the book Understanding Environmental Pollution by Hill (p. 406):

500 ppm is considered a “tipping point” beyond which humanity must not allow itself to go before irrevocable changes could take place in our climate.

What about 550 ppm? According to the “Avoiding dangerous climate change” page on Wikipedia:

“Avoiding Dangerous Climate Change: A Scientific Symposium on Stabilisation of Greenhouse Gases” was a 2005 international conference[16] that examined the link between atmospheric greenhouse gas concentration, and the 2 °C (3.6 °F) ceiling on global warming thought necessary to avoid the most serious effects of global warming. Previously this had generally been accepted as being 550 ppm

560 is also a really nice number (twice the pre-industrial level of 280). On the Climate Avenue website:

It is calculated that if the carbon dioxide concentration reaches 560 ppm, the world will be in great danger.

Yikes! It is even calculated! Are there higher bidders?

Yes, there are. 650 ppm, from the Golden Rules report (PDF):

The Golden Rules Case puts CO2 emissions on a long-term trajectory consistent with stabilizing the atmospheric concentration of greenhouse-gas emissions at around 650 parts per million, a trajectory consistent with a probable temperature rise of more than 3.5 degrees Celsius (°C) in the long term, well above the widely accepted 2°C target.

But it doesn’t stop there: what about tripling or quadrupling the pre-industrial level (840 – 1120 ppm)? The NY Times reports:

Some experts think the level of the heat-trapping gas could triple or even quadruple before emissions are reined in.
[…]
Even if climate sensitivity turns out to be on the low end of the range, total emissions may wind up being so excessive as to drive the earth toward dangerous temperature increases

Here I stopped my search. It was a dizzying experience: so many declared tipping points, and I hadn’t even searched for increments of 25 yet. From 220 to 1120, that is quite a difference! Some advice for those who want to declare a new tipping point: a number ending in 50 is nice, but multiples of 100 or of 280 are even better.

Why so many declared tipping points? Let’s go back to the media alert of the UNFCCC:

Governments will be meeting 3 – 14 June in Bonn for the next round of climate change talks under the umbrella of the UNFCCC. A central focus of the talks will be negotiations to build a new global climate agreement and to drive greater immediate climate action.

So, why 400 ppm? Well, if it is there, why not just use it? The 400 ppm figure is a political move: the governments at that meeting will get it rubbed in, to steer the negotiations in a certain direction. The other declared tipping points probably also had their specific purposes. They are probably more symbolic in nature than scientific.

Speaking of numbers with a symbolic meaning, this is the 13th post on this blog. Should I start getting nervous? 😉

Can one count a consensus?

[Image: pie chart of the 97% consensus]

One of the most repeated claims about global warming is that there is a consensus among scientists that man is causing the current warming. Several papers have tried to prove this claim, the latest being a survey by John Cook et al. I became interested in this survey when it came online and several blogs were discussing the method. It seemed odd to me to prove a consensus of scientists by surveying titles and abstracts of papers. I was looking forward to the moment the paper would be published. That day is today.

First, let’s look at the (predictable) findings of the paper.

We analyze the evolution of the scientific consensus on anthropogenic global warming (AGW) in the peer-reviewed scientific literature, examining 11 944 climate abstracts from 1991-2011 matching the topics ‘global climate change’ or ‘global warming’. We find that 66.4% of abstracts expressed no position on AGW, 32.6% endorsed AGW, 0.7% rejected AGW and 0.3% were uncertain about the cause of global warming. Among abstracts expressing a position on AGW, 97.1% endorsed the consensus position that humans are causing global warming.

Wow. How did they arrive at this 97% in the first place? Well, simple.

First they removed the abstracts with no position on AGW. Then they calculated the percentages over the remaining total. Et voilà: a majority where there wasn’t one before.

Endorsement level      Abstracts   Original percent   Final percent
No position                7,930        66.4               0
Endorsing                  3,896        32.6            97.1
Rejecting                     78         0.7             1.9
Uncertain                     40         0.3             1.0
Total with a position      4,014        33.6             100

(The “final” percentages are calculated over only the 4,014 abstracts that were rated as expressing a position.)

Over their full selection of abstracts there wasn’t a consensus, not even a majority, unless one ignores most of the data.
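The percentages are easy to reproduce from the abstract counts alone; a minimal sketch (using the counts reported in the table above):

```python
# Reproducing the paper's percentages from the reported abstract counts.
counts = {"No position": 7930, "Endorsing": 3896, "Rejecting": 78, "Uncertain": 40}

total_all = sum(counts.values())                     # 11,944 abstracts in total
total_position = total_all - counts["No position"]   # 4,014 abstracts with a position

for label, n in counts.items():
    original = 100 * n / total_all
    final = 0.0 if label == "No position" else 100 * n / total_position
    print(f"{label:12s} {n:6,d}   original {original:4.1f}%   final {final:4.1f}%")
```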

Some random thoughts…

There is a strange twist of definitions in the paper

In fact, they looked at abstracts of scientific papers and evaluated whether they endorsed AGW or not. Most abstracts were very short anyway (they selected abstracts of less than 1,000 characters) and it was often difficult to conclude whether there was an endorsement or not.

Abstracts rated “no position” are not necessarily abstracts that neither endorse nor reject AGW, but more probably abstracts from which a position on AGW could not be determined from the title and abstract. Calling it “no position” implies it was clear that the paper made no endorsement or rejection. More likely it meant “not possible to determine”: one cannot conclude from the title and abstract alone what position the paper took.

The same goes for endorsement: it doesn’t mean the paper endorses AGW, it means the evaluator concluded from the title and abstract that the paper endorses AGW. Likewise for rejection: it doesn’t mean the paper rejects AGW, but that the evaluator concluded from the title and abstract that it does.

Science doesn’t work by consensus

Politics does. Consensus is a political instrument, used in the decision-making process. Science isn’t run by a show of hands, thank goodness! I can imagine many scientists leaning towards the same theory. But on an issue as complex as the climate, it is doubtful that most, let alone all, scientists would be on the same line.

Consensus is not majority

  • Consensus: a decision that can be supported by all the participants, even those that would prefer another decision. Consensus is not reached if even one member of the group is unwilling to proceed with a decision that he or she cannot support.
  • Majority: the largest group, more than 50%.

In this case, they were not measuring a consensus but a majority. And not a majority of all abstracts, but a majority within a subset: only those abstracts from which a position could be determined by the evaluator.

In the end, is there a consensus among scientists that man is largely responsible for the recent warming? I really doubt it, and in my humble opinion this paper is not able to prove it.

Oceans on acid

Last Monday I read an article in a newspaper that caught my eye. It was somewhat hidden in a corner of the last page and was about the acidification of Arctic waters. When checking, I found that the article probably originated from Reuters. It was about a report by 60 experts for the Arctic Monitoring and Assessment Programme, commissioned by the eight nations with Arctic territories.

There is a lot to be said about this short article, but so little time. I will focus on the alarmist message and how it is communicated.

To begin with, a strong statement from that article (my bold):

The report said the average acidity of surface ocean waters worldwide was now about 30 percent higher than at the start of the Industrial Revolution

I had heard that claim many times before, so I knew what it was about. A “30% increase” seems a lot, but keep in mind that the pH scale (which we commonly use to express acidity) is logarithmic, and 30% converted to a logarithmic scale is not exactly what one would expect.
The pH scale measures how acidic or basic a solution is. It runs from 0 (very acidic) through 7 (neutral) to 14 (very basic). But as said, the scale is not linear, it is logarithmic: a solution with a pH of 4 is ten times more acidic than one with a pH of 5 and a hundred times more acidic than one with a pH of 6. The same holds the other way around: a solution with a pH of 10 is ten times more basic than one with a pH of 9 and a hundred times more basic than one with a pH of 8.

[Image: the pH scale]

It is strange that the article expresses this in a measure that is difficult for a layman to grasp. How many laypersons would correctly understand what a 30% increase in acidity means? They would probably be horrified, while on the logarithmic pH scale we normally use the change is far less dramatic.

The article stated that the average acidity (technically, the concentration of H+ ions in seawater) is 30% higher than in the past. So let’s see what this means converted to a pH value. The pH value is the negative logarithm of the concentration of H+ ions in a solution.
Historically, the average pH value of the oceans was around 8.2. My knowledge of chemistry is a bit rusty after so many years, but this is how I think it goes:

  1. A pH of 8.2 means an H+ concentration of 10^-8.2 M, or in a more workable form: 6.3×10^-9 M (0.0000000063 M)
  2. Add 30 percent to that: 0.0000000063 M × 1.3 = 0.0000000082 M
  3. The pH after the 30% increase will then be: -log(0.0000000082) ≈ 8.1 (which is the current average pH of seawater)

Coming from 8.2, this leaves us with a pH decrease of about 0.1 over more than a century, which isn’t that impressive at all. The statement about a 30% increase in acidity seems to have been made especially for impact.
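For those who want to check the arithmetic, the same calculation in a few lines of Python (a sketch, using the rounded historical pH of 8.2 from above):

```python
import math

# Converting "30 percent more H+ ions" into pH units.
pH_old = 8.2                # rounded historical average ocean surface pH
h_old = 10 ** -pH_old       # H+ concentration in mol/l (~6.3e-9)
h_new = h_old * 1.3         # a 30% increase in H+ concentration
pH_new = -math.log10(h_new)

print(f"new pH: {pH_new:.2f}, a drop of {pH_old - pH_new:.2f} units")
# -> new pH: 8.09, a drop of 0.11 units
```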

Another thing that strikes me with this kind of release: not only is the explanation of the acidity rather misleading, the same can be said about the terminology, for example the term ocean acidification. Ocean water has a pH of about 8.1, Arctic water seemingly somewhat less. That is not acidic at all; it is slightly basic. But doesn’t it get more acidic? Technically it is possible to say that, but it is closer to the truth to say that the seawater gets slightly less basic, or moves somewhat closer to neutral. Saying that going from 8.2 to 8.1, or from 8.1 to 7.8, is “getting more acidic” is rather misleading. Even when the pH drops 0.3 units, the water is still not acidic. If we take the scenario from the Multi-Ecosystem Comparison paper (0.0017 pH units per year), it would take about 650 years before reaching neutral. Only after that would it start to get acidic, assuming we manage to keep putting CO2 into the air for the next 650 years.
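The 650-year figure is simple arithmetic, assuming (for the sake of argument) that the rate stays constant:

```python
# Years from today's pH of ~8.1 down to neutral (pH 7) at 0.0017 pH units per year.
print((8.1 - 7.0) / 0.0017)   # -> about 647 years
```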

Then there is of course the mandatory assumed attribution (but look at the “almost all experts” statement; the consensus seems to be cracking a bit…):

At almost 400 parts per million (ppm), there is now 40 percent more carbon dioxide in the atmosphere than before the industrial era began. Almost all experts say the rise is linked to the burning of fossil fuels.

This I found a very odd statement:

“Ocean acidification is likely to affect the abundance, productivity and distribution of marine species, but the magnitude and direction of change are uncertain.”

So there is the possibility that it doesn’t affect abundance, productivity and distribution, and even if it does, no statements can be made about how much or in what direction? I could be wrong, but to me this reads as: we don’t know squat.

And finally, look at the end of the article:

The report will be presented to Arctic governments at a meeting in Sweden next week attended by U.S. Secretary of State John Kerry and Russian Foreign Minister Sergei Lavrov, among others.

The article probably has more to do with politics than with science.

What difference does it make anyway?

In the previous two posts I looked at the original hockey stick and its latest incarnation. Both hockey stick shapes seemed to be artifacts of the methods used, not of the underlying data. But, you could say, “Even if this uptick doesn’t follow from the data in those two graphs, we have measured surface temperatures for more than a century and the way the temperatures go is up. If the hockey stick graph doesn’t tell the story, the measurement data surely do! So what difference does it make anyway?”.
I have seen this remark pop up in several discussions. At first I was puzzled by such statements, but now I think they fail to take into account what is really at issue. Let’s look into it in more detail.

There are two data sets in play here. The first is the proxy data set, which consists of proxy data like tree rings (Mann’s hockey stick) or ocean sediment cores (Marcott’s hockey stick). The second is the instrumental record, which consists of temperature measurements made with thermometers.

Proxy data is NOT real temperature data. Previously I assumed it was, because I knew that, for example, in a good year the rings of a tree will be wider than in a colder year. Although this is definitely true, it is also true that there are other influences on tree rings, like moisture, nutrition, diseases, pests, competition with other plants/trees, interactions with wildlife, weather events and who knows how many other elements that matter for the health of that tree. In that sense, the width of a tree ring depends not only on temperature but also on these other influences. This means the temperature signal is diluted in the proxy data and not directly comparable with real temperature data. What can be said is that conditions for that tree were better or worse over time, not necessarily that temperatures went up or down. The proxy data will contain a temperature signal, but it will be noisy (the temperature signal is probably a big part of it, but not necessarily a constant part).

Thermometers, on the other hand, have a very good temperature signal. When the temperature goes up, the substance they contain (alcohol, mercury, metal) expands; when it cools, that substance contracts. The higher the temperature, the bigger the expansion; the lower the temperature, the bigger the contraction.
After the measurements, it becomes more complicated, with issues like the UHI (Urban Heat Island) effect on the measurements and the further processing of the data (is it really representative of global or Northern Hemisphere temperatures?), but that is a different story altogether.

Another issue in this comparison is the resolution. For example, the Marcott hockey stick has a resolution of more than 300 years. The instrumental record has a much higher resolution, down to one day. Even if we bring that to a year, or even 10 or 20 years, it is still a much higher resolution than that of the proxy data set. If the instrumental record were somehow appended to the proxy data and treated the same way, it would be barely one measly point, and probably not even placed high in the graph either.
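To see what such a difference in resolution does, a toy sketch with synthetic numbers (this is not the Marcott method itself, just plain 300-year averaging):

```python
import numpy as np

# A sharp, 100-year, 1 degree C warm spike in 1,200 years of annual "data"...
temps = np.zeros(1200)
temps[500:600] = 1.0

# ...averaged into 300-year bins, roughly the proxy resolution mentioned above.
bins = temps.reshape(-1, 300).mean(axis=1)
print(bins)   # -> [0. 0.333... 0. 0.]: the spike survives only as a 0.33 bump
```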

As far as I know, there is no dispute that the world has been warming for 160 years. Temperatures have been measured for some time now, and although we are currently in a flat-lined period, the general trend since 1850 has been upwards. But that is not what these two hockey stick graphs were trying to say. What they set out to prove is that the last century is unusually warm compared to previous eras. According to their statements, this hasn’t happened in, let’s say, the last 1,000 years (Mann’s hockey stick) or 11,300 years (Marcott’s hockey stick), and therefore it has the human fingerprint all over it (because of humans emitting more and more CO2 into the atmosphere).

Let’s keep the focus on what is really being said here. At issue in the hockey sticks is the uniqueness of the warming, not the fact that it warmed. We already know it warmed, but we don’t know whether this happened before, and the data in these two studies is not sufficient to base that conclusion on. Even if it had happened in the past, these methods would not be able to show it. And when this uniqueness over the long time frame doesn’t follow from the data, it makes no sense to try to prove it with the incredibly short data set of the instrumental record.

Another issue that came to light with the Marcott paper: claiming that the last 100 years are unprecedented (in the press release) and later saying that the non-robustness of the last 100 years doesn’t matter because the instrumental record could prove it anyway (in the FAQ) is not really honest. The claim was made precisely about that non-robust data, while in reality the graph said little about the last 100 years and even seemed to show that the data is useless for the current period. If the available evidence doesn’t support a claim, then one shouldn’t make that claim.

Returning to the initial question: what difference does the incorrectness of the last part of the hockey stick graphs make, given that we know the earth has warmed over the last 160 years anyway? As seen above, that is a false premise, because the warming itself was not what the hockey sticks were trying to prove. But there is more to it than that, and it was the statements about the Marcott paper that led me to notice it. The initial question diverts attention from the strong statements that were made in the press. Let me just turn the question around: if the proxy data has to be tortured to get it into a hockey stick shape, how much signal of our current temperatures is really in the proxy data set? To put it another way: how much of an “independent” confirmation of our current temperatures is this really?

In the end, does it matter? For those who have read the papers, probably not: if they saw the articles in the press, they could put them into context. But it does matter for the laymen who only got to see the articles in the press and were yet again confirmed in their beliefs, without realizing that the papers themselves didn’t warrant those conclusions at all.