The accuracy of the proof or the accuracy of the selection?

[Image: Cook presentation slide, “97% consensus”]

There wasn’t anything new in the Cook lecture, yet one thing did show me a new way of looking at the consensus. I already knew there were several studies and that they showed roughly the same result, but I had never seen those results presented side by side in a single image.

If I looked at it through the eyes of somebody ignorant of the matter, I would be really impressed by such results: the same number every time, even with different methodologies. Seen that way, I would surely think that this 97% value is a pretty plausible figure, a robust quantification of the consensus.

But seeing those numbers next to each other, it dawned on me. My first thought was: “No way!” I had looked into the methodologies of those three studies earlier, and I must admit I have a hard time believing that such crude methodologies and rather ambiguous statements could all land on about the same result within just a couple of tenths of a percent!

Let’s first look at how the three studies were done.

Doran & Zimmerman (2009)
Doran and Zimmerman sent an invitation to participate in a survey to 10,257 Earth scientists. Two questions were asked. The first one was:

When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen or remained relatively constant?

and the second one:

Do you think human activity is a significant contributing factor in changing mean global temperatures?

They received 3,146 completed surveys, from which they selected the 79 scientists who listed climate science as their area of expertise and who had also published more than 50% of their peer-reviewed papers on the subject of climate change. Two of those 79 had not answered “Risen” on the first question and were not presented the second one; 75 of the remaining 77 answered the second question with “Yes”, giving the 97.4% figure.
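Just to check the arithmetic, here is a quick Python sketch using only the counts quoted above:

```python
# Doran & Zimmerman (2009): 10,257 invited, 3,146 completed surveys,
# of which an expert subset of 77 was presented with the second question.
asked_q2 = 77
said_yes = 75

print(f"Headline figure: {said_yes / asked_q2:.1%}")   # 97.4%
```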

Anderegg et al. (2010)
Anderegg et al. compiled a database of 1,372 climate researchers and classified them either as convinced by the evidence for anthropogenic climate change or as unconvinced by the evidence. They found 903 convinced, 472 unconvinced, with 3 scientists classified in both groups. They found that the unconvinced made up only 2% of the top 50 climate researchers as ranked by expertise (number of climate publications), 3% of the top 100, and 2.5% of the top 200, excluding researchers present in both groups. This gives their 97.5% figure.
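The same kind of quick check makes the contrast between the whole database and the top-ranked subsets explicit (a sketch; the top-50/100/200 percentages are the ones quoted above):

```python
# Anderegg et al. (2010): counts as quoted above
convinced, unconvinced, in_both = 903, 472, 3
total = convinced + unconvinced - in_both    # 1,372 researchers in the database

print(f"Unconvinced in the whole database: {unconvinced / total:.1%}")  # 34.4%

# Unconvinced share among the most-published researchers, per the paper:
for top_n, pct in [(50, 2.0), (100, 3.0), (200, 2.5)]:
    print(f"Top {top_n}: {pct}% unconvinced, {100 - pct}% convinced")
```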

Cook et al. (2013)
Cook et al. examined 11,944 climate abstracts from 1991–2011, compiled using the search terms “global climate change” or “global warming”. They found that 66.4% of the abstracts expressed no position on AGW (anthropogenic global warming), 32.6% endorsed AGW, 0.7% rejected it and 0.3% were uncertain about the cause of global warming. They then tossed out all the abstracts that expressed no position and found that 97.1% of the rest endorsed the consensus position.
In a second part of the study they invited the authors to self-rate their own papers. A small number did so, and the result was 97.2% endorsing the consensus.
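Dropping the no-position abstracts and recomputing reproduces the 97.1% from the first part to within rounding; a quick sketch:

```python
# Cook et al. (2013): rounded shares of the 11,944 abstracts, as quoted above
no_position, endorse, reject, uncertain = 66.4, 32.6, 0.7, 0.3

took_position = endorse + reject + uncertain     # 33.6% of all abstracts
print(f"Abstracts taking a position: {took_position:.1f}%")
print(f"Endorsing, among those: {endorse / took_position:.1%}")
# ~97.0% from the rounded shares; the paper's raw counts give 97.1%
```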

Now that we know how the studies were performed, we can ask the question: were they counting the same thing? Not exactly. Zimmerman counted active climate scientists who answered “Yes” to the second question. Anderegg counted climate researchers classified as convinced or unconvinced by the evidence, ranked by their number of climate publications. Cook counted papers by all kinds of scientists (climate-related or not), inferring endorsement or rejection of the consensus from what was found in the title and abstract of each paper.

It also seems to me that the conclusions were based on a carefully constructed selection, in two cases even after throwing out a big chunk of the data (66.4% for the first part of the Cook study and even a whopping 97.5% for the Zimmerman study!).
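Both discard fractions follow directly from the counts already given; a minimal check:

```python
# Share of the gathered data set aside before the headline figures were computed
cook_discarded = 66.4 / 100              # abstracts expressing no position
zimmerman_discarded = 1 - 77 / 3_146     # completed surveys outside the expert subset

print(f"Cook et al.: {cook_discarded:.1%} of abstracts set aside")     # 66.4%
print(f"Zimmerman: {zimmerman_discarded:.2%} of surveys set aside")    # 97.55%
```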

Obviously they were not counting the same things and were very selective in what they included in the count. Zimmerman, for example, counted only the active climate scientists and tossed out everything else, on the reasoning that the experts were the best measure of the consensus. Had they also counted the rest of the completed surveys, the result would have been much lower. So Zimmerman had to resort to using only a minute subset of the gathered data in order to arrive at the magical 97%.

Cook et al., on the other hand, counted papers by all kinds of scientists, yet came to practically the same number. So why is the opinion of those active climate scientists suddenly no longer the best measure of the consensus, as the Zimmerman study claimed?

The Zimmerman paper suggested that 97% of the active climate scientists endorsed the consensus, yet if you look at the number of unconvinced scientists in the Anderegg paper, they counted 472 unconvinced out of 1,372 scientists. That is roughly 34%, much more than 3% if you ask me. They then ranked the researchers by the number of climate papers they had published and, lo and behold, found that 97% versus 3% split again among the top-ranked.

The thing is, these are not only different methodologies; they also queried different groups of scientists and different quantities. How would they fare if they were to measure the exact same thing with their different methodologies? Would they all still come up with 97%? To within a couple of tenths of a percent? Just wondering.
