When I read the new Cook et al 2018 paper for the first time, the one thing that stood out was that the example arguments were simplified versions of skeptical arguments, stripped of any nuance and context and therefore no longer representative. I already foresaw many posts in my future about these fabrications…
In the meantime I found the discussion by Barry Woods on Twitter, tirelessly calling out the many misrepresentations in the paper. The reaction of some of his opponents, that this doesn’t matter because the compiled arguments are fallacious anyway, puzzled me. I couldn’t grasp that they were just okay with:
- The authors (or Cook and the SkS team) coming up with simplified, unnuanced arguments based on what they think their opponents believe
- then Cook et al showing that these simplified, unnuanced arguments are logically fallacious
- thus providing proof that their opponents are wrong and can therefore safely be ignored when it comes to those issues.
That is about as close as one can get to a straw man argument. For those who are not familiar with this type of fallacy: according to Wikipedia, the definition of a straw man argument is (my emphasis):
A straw man is a common form of argument and is an informal fallacy based on giving the impression of refuting an opponent’s argument, while actually refuting *an argument that was not presented by that opponent*.
The examples Cook et al used were textbook examples of this type of argument, but the defenders of the paper were undeterred by it, or maybe did not understand the concept. It seemed to slide off them like water off a duck’s back. I couldn’t really understand that, given that it is plain for everybody to see.
Until I found the following tweet:
Of course! How could I have been so shortsighted? Until then I was solely focused on whether the arguments were actual arguments from “denialists” and whether the simplified, unnuanced arguments without context were representative of the claims found in the wild. Because of this focus, I failed to understand that the goal of the paper could be different from faithfully representing “denialist” arguments. Re-reading the paper with that in mind gave me an aha moment.
Before I go on: I still believe that Cook et al blatantly (and probably even knowingly) misrepresent the arguments of their opponents and therefore are attacking straw men, but this focus distracts from understanding what the paper is about. That being off my chest, let’s continue with the paper.
The paper starts with the usual problem definition:
- Misinformation can have significant societal consequences
- There is an overwhelming scientific consensus
- There is however little awareness of the scientific consensus among the general public
- Therefore support for mitigation policies is stalled.
Luckily, there is a solution: inoculation theory. This can neutralize the influence of misinformation “by explaining the techniques used to distort the facts”.
The last piece of the puzzle is the list of 42 common “denialist” claims in the supplementary data. Each item is a statement, followed by an analysis of why that statement is a fallacy. These items could be considered a weaker form of the arguments found in the wild. The items on the list are “analysed” and “found to be false”, therefore they can be used to “explain the techniques used to distort the facts”. Meaning: when an argument is found in that list, the list can show the technique used in that argument, thereby weakening or even eliminating its effect. Just as with an inoculation in modern medicine, it is not necessary to use the real arguments. That would be too messy and would complicate things.
That list will probably be used by those with no(t much) knowledge or background of the issue yet needing a quick-and-dirty way of evaluating claims, for example communicators (this is an actual target group of this list). For them the list will be a resource that they can fall back on. If they feel insecure about a claim, they can search that list. This is how I think it works in practice (see the sketch after this list):
- A communicator encounters something that might be a “denialist” claim
- He/she looks into the list of 42 fallacious arguments and tries to find something that fits that claim
- When there is something similar in the list ⇒ it is assumed to be misinformation and the argument can safely be dismissed.
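To make that workflow concrete, here is a minimal sketch in Python. It is my own illustration, not anything shipped with the paper: the claim texts and the `check_claim` helper are hypothetical, and a crude textual similarity stands in for the communicator’s “looks similar” judgment.

```python
from difflib import SequenceMatcher

# A tiny stand-in for the paper's list of 42 items: each entry pairs a
# simplified "denialist" claim with the fallacy the authors assign to it.
# These texts are illustrative, not quotes from the supplementary data.
FALLACY_LIST = [
    ("past consensuses have been wrong, so the consensus on AGW is wrong",
     "non sequitur (hidden premise: a wrong consensus elsewhere discredits this one)"),
    ("the climate has changed before, so humans are not causing it now",
     "non sequitur"),
]

def similarity(a: str, b: str) -> float:
    """Crude textual similarity between two claims (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_claim(claim: str, threshold: float = 0.6) -> str:
    """Mimic the communicator's workflow: find the closest list item and,
    if it looks 'similar enough', dismiss the claim as misinformation."""
    best_claim, fallacy = max(FALLACY_LIST,
                              key=lambda item: similarity(claim, item[0]))
    score = similarity(claim, best_claim)
    if score >= threshold:
        return f"dismissed as misinformation: {fallacy} (match {score:.2f})"
    return "no match in the list; actual thinking required"

# A nuanced claim: it argues for caution, not for rejecting the consensus,
# yet it is textually close enough to the first list item to be "matched".
nuanced = ("past consensuses have been wrong, so we should be careful "
           "and not accept the consensus on AGW at face value")
print(check_claim(nuanced))
```

With this wording the nuanced claim scores above the similarity threshold and gets dismissed along with the simplified list item, even though its actual reasoning is different. That is precisely the miscategorization problem discussed next.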
In theory this may seem okay, but the arguments found in the wild are much more nuanced than those provided by the list. The items on that list carry no nuance and consider no context. For example, if someone claims that “there have been other scientific consensuses that have been wrong so we can’t rely solely on the consensus on climate change”, then it will land us in the fourth row. According to the analysis, this is because it has the hidden premise that “if other consensuses proved untrustworthy, then the consensus on AGW must be untrustworthy”.
If that is the actual premise that the argument is built on, then yes, this is most definitely a logical fallacy. But if that hidden premise is not the starting point and something else is (for example: some similar consensuses have been wrong in the past → we have to be careful with such a consensus and not just accept it at face value), then this is a new construct and it does not necessarily result in a logical fallacy. If that communicator just ticks this argument off the list because it looks similar, then the claim will be considered misinformation, which is not necessarily the case. It is easy to miscategorize arguments if one doesn’t pay attention to the actual argument or to what exactly is in the analysis of that item.
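Written out in a simplified logical form (my own reconstruction, not notation from the paper), the difference between the two readings becomes visible:

```latex
% Reading with the hidden premise Cook et al assign (an invalid generalisation):
\exists x\,\big[\mathrm{Consensus}(x) \wedge \neg\mathrm{True}(x)\big]
  \;\therefore\; \neg\mathrm{True}(\mathrm{AGW\ consensus})

% The cautious reading (valid): one wrong consensus only shows that consensus
% alone does not guarantee truth, so it should not be accepted at face value:
\exists x\,\big[\mathrm{Consensus}(x) \wedge \neg\mathrm{True}(x)\big]
  \;\therefore\; \neg\,\forall x\,\big[\mathrm{Consensus}(x) \rightarrow \mathrm{True}(x)\big]
```

The first inference jumps from “some consensus was wrong” to a verdict on one specific consensus; the second only withdraws the blanket rule “consensus implies truth”, which is exactly the call for caution.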
Personally, I think this is a dangerous methodology for the target group it is designed for. It gives the misleading idea that the debate is rather straightforward and can easily be categorized in 42 slick items. It becomes tricky when one has to deal with real arguments that have nuances and are made in a certain context, making the debate not as black and white as it is presented.
I don’t think this technique will empower communicators and educators to think critically. On the contrary, it will lead to uncritically following the vision of the authors/SkepticalScience that denialists are wrong because, when the authors looked at the arguments they think denialists believe in, they found only logical fallacies.
Michel,
You have correctly identified a core issue. Climate alarmists are able to roughly match a claim to one of the logical fallacies and dismiss something that they dislike. The purpose of the paper is that the logical fallacies can be used to “inoculate” against ideas that run contrary to the consensus. That is, to vaccinate, as if different ideas or perspectives were a disease. To take the analogy further, the idea of absorbing the “dead” arguments is to bolster one’s immune system against the “live” arguments.
Two questions arise.
1. Are they qualified to judge other arguments?
I think not, on the basis that, in promoting the idea of consensus, alarmists like John Cook cannot even read what papers like Doran and Zimmerman 2009 or Cook et al 2013 actually proclaim as a “consensus”. For instance, the Polar Bear paper of last year cited these papers in support of the following claim:
None of the papers support that statement, and certainly not the first two.
2. Is there an alternative approach?
I believe there is. If there are issues that can be identified in the method of argument, then you can learn from them and improve your own argument. I wrote a few notes on Fundamentals that Climate Science Ignores a few years ago. For instance, making distinctions between “positive” and “normative” statements, or assessing the quality and relevance of arguments in support of a conjecture, helps filter the better arguments from the weak ones. We can learn from the errors of others, which is always better than self-assessment on our own terms.
The references that Cook makes to consensus papers are rather interesting. In the Alice-in-Wonderland paper he claimed that global warming “presents a global problem”, although none of the referenced papers came even close to investigating that. In this paper he references Doran & Zimmerman, Anderegg, Carlton and his own 2013 paper. While he seriously weakened that claim in this paper, I think he is still overstating what those papers actually investigated. This will be the subject of the next post, on clear definitions (or the lack thereof)…
Yes, it’s always dangerous to teach people that what they need to do when presented with new ideas…is to dismiss them without thought if they’re remotely close to any bullshit talking point someone’s disproved already. An intelligent person could spend all day crafting “similar” arguments that are easily verified and true…or ones that are similar to “approved” ideas that are blatantly false.
This all kind of comes down to the real problem with many modern movements…they need to talk with people, engaging with them…not at people, simply saying things with no intent to learn anything about the other person’s viewpoint, criticisms they might have, or indeed anything they might have to say.
Yes, it cuts both ways. Learning how to deal with logical fallacies could also be used to analyze the alarmist arguments, which are plentiful.