Monthly Archives: December 2013

Global warming will intensify drought … Possibly. Maybe. Perhaps.


The last IPCC report assigned only low confidence to an increase in the frequency of droughts. In the skeptic blogosphere this quickly became known and was widely shared. The mainstream media didn't seem to pick it up; it obviously wasn't in the press release. A couple of days ago I came across an article that seemed to defy this statement: Global warming will intensify drought, says new study by John Abraham.

I was rather curious why droughts would be increasing. Did some new evidence pop up since AR5? Was there something the other studies missed? What proof did this particular study find that contradicted the IPCC statements in AR5? Did the investigators have new insights? Looking at the name of the author, I feared this could be a one-sided article. My fear became reality. Already in the second sentence, human emissions were plainly blamed for heat waves and changing rainfall patterns. This is how the article starts (my emphasis):

When scientists think about climate change, we often focus on long term trends and multi-year averages of various climate measures such as temperature, ocean heat, sea level, ocean acidity, and ice loss. But, what matters most in our day-to-day lives is extreme weather. If human-caused climate change leads to more extreme weather, it would make taking action more prudent.

It is clear that human emissions have led to increased frequencies of heat waves and have changed the patterns of rainfall around the world. The general view is that areas which are currently wet will become wetter; areas that are currently dry will become drier. Additionally, rainfall will occur in heavy doses. So, when you look at the Earth in total, the canceling effects of wetter and drier hides the reality of regional changes that really matter in our lives and our economies.

Something that caught my eye was the shift in focus from global climate to local weather. For a long time we heard about GLOBAL warming. Now the author has an explanation ready if there are no global changes: just say that local weather counts more. Speaking of moving goalposts. If local weather extremes are attributed to global warming, there is no limit to what one can prove.

The study hinted at in the title was by Trenberth et al.: Global warming and changes in drought. It was behind a paywall, so not freely available to look at.

It seems to discuss the different ways that droughts are measured. In the paper, Trenberth documented five different teams coming to five different conclusions.

One reason for this was the different base periods (1950-2008 and 1950-1979) used by the investigators. I can imagine that; it is logical. Different base periods can give different results, something I have seen before.
A second reason was the limited availability of the data.

Basically, the conclusion was that climate change impacts our lives, so it is important to have more data.

Hey…wait…with all the different results of those five different teams and the uncertainties from a lack of data, how could Abraham ever come to the conclusion that, ahem, “Global Warming will intensify drought”?!?! That doesn’t fit.

Confused by this article, I searched for more information about the paper. I found another article: Still Uncertain: Climate Change's Role in Drought by Bobby Magill (Climate Central, also not exactly a skeptic site). This article, with quotes from the author, gives a whole new perspective on the story compared to what we saw in the Abraham article:

It’s common for direct connections to be drawn between climate change and the effects of the devastating droughts that have been afflicting the U.S. and other parts of the world over the last decade. A new analysis led by scientists from the National Center for Atmospheric Research says there are still many uncertainties about how climate change is affecting drought globally, though.

The analysis, authored primarily by NCAR senior scientist Kevin Trenberth, concludes that more global precipitation data need to be made available and natural variability needs to be better accounted for to fully determine how climate change is affecting drought worldwide.

“We are really addressing the question of, how is drought changing with global warming and expected to change in the future?” Trenberth said Friday. “To address that question, how is drought changing with global warming, you have to address the question, is drought changing?”

[…]

Trenberth’s paper concludes that changes to the global water cycle in response to global warming will not be uniform. The analysis noted that the differences in precipitation between typically wet and dry regions and seasons will likely increase, but climate change is unlikely to directly cause droughts in the near future.

That is something completely different. This article seems to be about acknowledging the uncertainty of global warming's effect on droughts. And yes, Trenberth assumes the difference in precipitation between wet and dry regions will likely increase. But this is his starting assumption, not something confirmed by the data (because not enough data is available yet to do so).

To be clear, this is not his conclusion, as Abraham seems to suggest, but the assumption he starts from!

Big difference.


Things I took for granted: when the blades of a windmill turn, it is saving fossil fuel somewhere else

In a strange way I do like windmills. They look majestic and the slow turning of the blades has something meditative about it. There is also that notion that when the blades are turning, energy is being produced and therefore fossil fuels are being saved somewhere else. It is a nice thought, but is it also true? I didn't give it much thought until I experienced something that made me think.

Just before the millennium I bought a house and renovated it. As someone green at heart I wanted some green technology in it. I thought a solar panel would be nice. When asking around I heard that solar panels that produce electricity were not efficient yet, and I was advised to take a solar panel that heats water in a boiler. The principle was really straightforward: the sun heats the water that goes into the boiler. If I need warm water and the water isn't warm enough, the central heating system heats it up instead. If the water was warm enough, I actually had water heated by the sun. Simple as that.

It looked promising. Even when there wasn't that much sunlight, the indicator light on the solar boiler lit up. My water was heated by the sun and it saved the gas that would otherwise have been needed to heat that same water. Nice, it worked!

But there were some problems too. The central heating didn't work that well: it took a looong time before the room was warm. It was a new system, so I feared that the capacity of the central heating had not been calculated properly and that my system was underpowered. Not really threatening, but quite an inconvenience. Probably my own mistake. What goes around comes around.

Other things made me think as well. One summer I powered off my central heating. My reasoning was that in summer I only used warm water, and that shouldn't be a problem when the sun was shining; no need for the central heating to heat water, the solar boiler should be enough. That didn't go well. Although the indicator light was on most of the time (and water was being heated), the water that came out of the system was only lukewarm.

Fast forward to somewhat later: I heard strange noises in the solar installation and unplugged it. That had quite some consequences. When I put on the heating, the room heated up in a jiffy … the central heating was not underpowered after all. It worked just fine. The problem seemed to be the solar installation, or the link between the solar installation and the central heating system.

But if the central heating had trouble heating the room, it was running longer and therefore using more gas. Was this really true? To test this I left the solar installation off for a longer period. Then came the winter. The central heating wasn't struggling; indeed, less gas was needed to reach the desired room temperature and the room warmed up much faster than before.

I don’t know why the installation was faulty. Maybe there was a fault in production. Or it wasn’t properly installed. Or it didn’t work well with my central heating system. Or I couldn’t expect more from this early generation solar installation. Or my consumption pattern of warm water wasn’t compatible with the system. Or whatever.

The point I want to make is this: when the alternative energy source is only a tiny portion of the total energy produced, it is really difficult to know whether you are saving energy or not. I didn't notice that my installation took more energy than it delivered. I thought I was saving gas because the indicator light was on, but that was clearly not the case.

The same goes for generating electricity with wind or solar. Can we be sure that when the blades of a windmill are turning, energy is being saved somewhere? Wind and solar energy are intermittent and are used in a system that needs constant power production. This means the production of electricity depends on the wind or the sun, not on our consumption. We cannot rely on wind and/or solar to produce energy when it is needed, so backup power needs to be provided, which uses (fossil) fuel. But with only a few percent of solar and wind in the energy mix, nobody will ever notice whether we are saving fossil fuels, breaking even or using more in the process.

Things I took for granted: tree rings are accurate temperature proxies


The biggest hurdle I took in my quest to understand the global warming story was a graph called the Hockey Stick. It represents the temperatures of the last 1,000 years. I believed what I was seeing and saw it as proof of current anthropogenic global warming. I found it everywhere and it was made by scientists, so naively I thought it had to be correct. If temperatures stayed stable for about 1,000 years and the last hundred years took off like a rocket, how much more proof does one need, considering that carbon dioxide is a greenhouse gas and we emit loads of it into the atmosphere?

Tree rings show how warm or cold the climate was; I had no real problem with that, I learned it in primary school. The higher the temperature, the wider the tree ring. The lower the temperature, the narrower the tree ring. Just count and measure them and you are done. This was confirmed by scientists who declared that bristlecone pines were good proxies for past temperatures because they live long.

There was the word: proxies. A thousand years ago there were no thermometers. Temperatures back then were not measured by instruments, but inferred from something that is influenced by temperature. That is a proxy. Temperature has an influence on the width of tree rings. True, but are they good indicators of past temperatures?

Looking at the background: trees are complex organisms. They react to temperature, sure, but also to a bunch of other things. Besides temperature they react to:

  • precipitation
  • nutrients
  • disease
  • wind
  • sunlight
  • competition with other trees
  • competition with animals
  • local variations
  • concentration of carbon dioxide in the air
  • events like storms, lightning,…
  • and probably many, many more…

That makes them different from thermometers. Thermometers measure temperature via the expansion/contraction of a substance, which is representative of the current temperature. There is a direct relation between the expansion/contraction and the temperature, hence the ability to measure temperature.
Temperature has an influence on the width of tree rings too, but not directly. The tree rings record the temperature signal, but also all those other signals. The temperature signal is diluted among the others, so there will be a lot of noise in the tree rings. Getting rid of the noise and distilling only the temperature signal will not be possible if nothing is known about the other signals.
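To see what I mean by dilution, here is a minimal sketch in Python. The weights and noise levels are my own invented numbers, not taken from any tree-ring study; the only point is that when ring width responds to several influences at once, its correlation with temperature ends up well below 1.

```python
# Minimal, purely illustrative sketch of how a temperature signal gets diluted
# in a proxy that also responds to other influences. The weights and noise
# levels are invented for illustration, not taken from any real study.
import numpy as np

rng = np.random.default_rng(0)
years = 1000

temperature = rng.normal(0.0, 1.0, years)    # the signal we want to recover
precipitation = rng.normal(0.0, 1.0, years)  # one confounding influence
other = rng.normal(0.0, 1.0, years)          # nutrients, disease, competition, ...

# Hypothetical ring width: temperature is only one of several contributors.
ring_width = (0.4 * temperature
              + 0.4 * precipitation
              + 0.2 * other
              + rng.normal(0.0, 0.5, years))  # measurement noise

r = np.corrcoef(ring_width, temperature)[0, 1]
print(f"correlation between ring width and temperature: {r:.2f}")
# With these assumed weights the correlation comes out around 0.5:
# ring width alone is a rather noisy estimator of past temperature.
```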

There are other issues: tree datasets are sparse. There aren't that many very old trees. How do you compare them with modern datasets from real thermometers? Thermometers are read at least twice a day, sometimes every hour. Tree rings give one value per year, and depend on the individual tree and the period in which it lived.
A lot of fuss about sparse data; where did I hear that before?

Back to my own story: the Hockey Stick was a difficult hurdle to take. It was difficult because I began to realize that the media brought one-sided information about the climate, but I wasn't yet far enough to realize that it was necessary to be critical, not just to assume that something is true because the majority thinks so.

Why did I take it for granted? It was a combination of several things:

  • I just looked at the graph and what it meant seemed obvious. I even saw it as proof that humans were causing global warming.
  • I trusted science. I had no reason to believe the scientists were biased in any way or that the information that reached me was one-sided.
  • The graph was found everywhere I searched for historical temperature data. I had no reason to think this could be one-sided information; it seemed straightforward.
  • It was presented as something evident: there was no doubt about it. For example, a scientist stating that "tree rings are a good proxy" because the trees live long. I didn't realize that he probably meant a proxy in time, not necessarily a reliable proxy for temperature. Big difference.
  • The basis looks simple and straightforward: warmer means wider rings, colder means narrower rings. It didn't seem like rocket science. What I forgot to take into account was that a tree is a living thing, reacting to the many influences in its environment. It was presented too simply, and a little bit of thinking would have uncovered the flaws in the reasoning.

When I think back on this period, I ask myself the big question: how could I ever have believed that tree rings are thermometers in disguise? How could I ever have believed this stuff?

Not the only one

The last two posts were about the global average temperature of the Earth. Reading them again, I realized that I forgot to mention an important thing: not only did I assume that there was some kind of global average temperature, but also that there was only one dataset "measuring" it, in an accurate way we could trust. In my believer years this was the NASA GISS dataset (largely based on the NOAA NCDC dataset).

There is obviously not just one dataset; there are at least five, probably even more. There are surface-based datasets like NASA GISS and HadCRUT, but also satellite-based datasets like RSS and UAH. The results of these sets are not identical; there are differences between them. Besides the different measurement methods, there is a difference in base period. The datasets don't give real temperatures, but anomalies (departures from a base period). The base period for NASA GISS is 1951-1980 (smack in the middle of a cold period), RSS uses 1979-1998 and UAH has used 1981-2010 since 2010. NOAA NCDC uses 1971-2000.

Let's go back to the last post about the warmest November ever. In the media, unsurprisingly, only the NOAA NCDC dataset was used: November 2013 was 0.78 °C warmer than the average over all years since 1880. NASA GISS had something similar with an anomaly of +0.77 °C, HadCRUT had +0.59 °C, UAH had +0.19 °C (only the 9th warmest since 1979) and RSS had +0.13 °C (only the 16th warmest since its start).

Now comes the fun part. Alarmists say that the base period doesn't really matter, it is the trend that counts. But when one compares anomalies with each other, it does matter. For example, if you take the NASA GISS dataset and compute the November 2013 anomaly against different base periods, you get different values (anomalies in °C):

                                 Smoothing radius
Base period                      250 km    1200 km
1951-1980 (NASA GISS)             0.73      0.77
1971-2000 (NOAA NCDC)             0.56      0.60
1979-1998 (RSS)                   0.51      0.55
1980-2010 (UAH)                   0.38      0.44
1880-2013 (complete period)       0.70      0.76

This is exactly the same dataset. The only difference is the base period, and we already get a range from 0.38 °C to 0.77 °C. That spread is roughly half of the supposed warming. Measuring and calculating the global average temperature doesn't seem to be the exact science the media want us to believe it is.
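For those who prefer to see the mechanism in code, here is a minimal sketch with a synthetic temperature series (invented numbers, not real GISS data). It only illustrates why the very same measurement turns into a different anomaly for every base period.

```python
# Minimal sketch of why the choice of base period shifts an anomaly.
# The temperature series is synthetic (a gentle warming trend plus noise),
# not real GISS data; the point is the mechanism, not the numbers.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2014)
temps = 14.0 + 0.007 * (years - 1880) + rng.normal(0.0, 0.1, years.size)  # °C, invented

def anomaly(value, base_start, base_end):
    """Departure of `value` from the mean over the base period (inclusive)."""
    base = temps[(years >= base_start) & (years <= base_end)]
    return value - base.mean()

latest = temps[-1]  # stand-in for the November 2013 value
for start, end in [(1951, 1980), (1971, 2000), (1979, 1998), (1980, 2010), (1880, 2013)]:
    print(f"base {start}-{end}: anomaly = {anomaly(latest, start, end):+.2f} °C")
# The same measurement comes out as a different anomaly for every base period:
# the colder the reference period, the larger the positive anomaly.
```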

So they took the highest number and threw it at the public as if this were the only dataset that matters. Without mentioning other datasets. Without mentioning the high uncertainty of the measurements before 1979 or even 2003. No balance at all here. To be honest, I was not even surprised to find out that the media broadcast only the highest number of them all. I don't think it is a coincidence that they took just this dataset and sounded the alarm. Keeping the scare alive.

The warmest November everrrrr


In the previous post I explored my misconception of a long-term average global temperature derived from sparse weather stations. Imagine my surprise when I read the newspaper the next day and came across a perfect example of what I was explaining: this was the warmest November ever. The article seemed to be taken from the VTM news of December 17, 2013. This was the quote from the news (translated from Dutch, my emphasis):

Last November was the warmest in 134 years worldwide and this basically means it was the warmest November ever measured. Normally the average temperature worldwide in November is 12.9 °C. This year it was 0.78 °C warmer.

The numbers are from NOAA via GHCN-M (monthly mean land temperature) combined with ERSST.v3b (Extended Reconstructed Sea Surface Temperature).

Look closely: it is presented as if this 0.78 °C were accurately measured somehow. This is obviously not the case. It is a statistical analysis under the assumption that these land + ocean measurements represent the real temperature of the Earth. For the public it is tempting to think this is the case, or that the calculations solve all sampling problems. At least I did.

But the measurements are mostly taken in places where people are happy to live. That is called "convenience sampling" and it introduces bias into the measurements. If these biased measurements are used in the calculation, the result will be a biased global average temperature.

Garbage In, Garbage Out

The Earth's temperature is very complex. There is no place on Earth that keeps the same temperature for long. Even regionally, temperature can differ quite a lot. How could one ever calculate the correct average temperature of the Earth (510 million square kilometers!) with, oh dear, a few thousand weather stations/buoys/drifters, and do this with an accuracy of …gasp… 0.01 °C?!?!?!?!

Moreover, what is this compared with? Measurements before the 1980s were very sparse and hardly existed before the 1940s. Think of sea temperatures (roughly 70% of the Earth's surface) taken from buckets hauled onto ships that happened to be there. I find it hard to believe that calculations on such sparse data achieve the same incredibly high accuracy. Can it even be calculated reliably at all?

Things I took for granted: Global Mean Temperature


For the alarmist mind climate cannot be more simple. Carbon dioxide levels go up, temperatures go up. Whatever weather event we encounter is caused or influenced by it. Nothing can even disprove this; there is no room for doubt with this simple logic.

This logic is based on several misconceptions. In the next few posts I will explore some of the misconceptions I had and how they changed.

The first misconception (addressed in this post) is: the Earth has a global temperature, this is measured, and it is going up in a way that is causing alarm. It even seemed to be accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Just a couple of years ago I had no doubt that this was feasible and that the science was mature enough to achieve this kind of accuracy. In my believer years I especially looked at the NASA GISS dataset. Not really a surprise: this dataset is extensively used by alarmist minds and it had an aura of trustworthiness. Let's look into it more closely.

Strange things start to happen when a person starts to think logically about the things that surround him. I came to the realization that in reality a single global temperature does not exist, and it seems absurd to claim we could measure it accurately.

To begin with, temperature varies a lot. Not only in location, but also in time. In humans, taking a temperature is really simple. Stick a thermometer in your mouth, read the value and you will have an accurate measurement of the temperature inside the body.

Not so for the Earth. There is no single convenient place where the temperature of the Earth can be measured. For example, in Belgium the south-eastern part (the Ardennes) has the highest elevation and in general has colder temperatures than the rest of the country. In the north-west there is the North Sea and temperatures are moderate there. In the north-east there are more extreme highs and lows. So even in a country as tiny as Belgium there are several different influences on temperature.

Even on a more local scale there are differences. I live near a hill, smack in the middle of the country. On that hill there is woodland, which has a slightly different temperature than its surroundings. A few kilometers from where I live there is another hill with a microclimate warm enough to cultivate grapes, something that is not possible where I live, even though it is within walking distance.

There is not only a huge variation according to location; each point also varies throughout the day and night. It is coldest in the morning just before sunrise and warmest in the afternoon. There is also variation throughout the year (coldest in winter, warmest in summer, with spring/autumn in between). And probably also longer cycles of 30, 60, 200 years,…

So no place on Earth keeps the same temperature for very long and temperatures change constantly. Measuring the mean temperature is quite a challenge. It is not possible to measure temperature everywhere, so the next best thing is to measure at as many points as possible, as is done in surface temperature datasets such as GISS and HadCRUT.

If all those stations were maintained in the same way, this would give us at least some idea of the temperature evolution over time (for the measured spots), but this is not the case. Stations are dropped, moved, instruments change, surroundings change,… Inevitably, the mean temperature is the result of a statistical analysis, hopefully a good representation of the real temperature.

When one wants meaningful results, samples must be representative of the population; bias in sampling will influence the end result. The problem here is that surface stations are situated in specific places: in or near cities, airports and other places where people are likely to live, and not in places where people normally don't live (mountains, deserts,…). In the GISS dataset, most samples are taken in the United States, some in Europe and Asia, and only very few in Africa and Australia.

This is called convenience sampling. It means there is no real random sampling: not all points have the same chance of being measured. Although convenience sampling has its merits, it is definitely not the right way to sample for a mean temperature, especially when instruments/locations/… change over time.

Sampling in convenient places means sampling in or near cities and airports, therefore contributing to the urban heat island effect. Due to pavement/asphalt/buildings, more heat is accumulated during the day and radiated at night, leading to higher temperatures than without these constructions. This could be compensated for, but that means starting from assumptions. The more the assumptions agree with reality, the more accurate the result. But how does one correctly compensate for all this bias?

This is not the only bias. I already learned about other siting biases, like weather stations located next to air conditioner units, close to buildings and parking lots, even one on the roof of a building. These things will undoubtedly have an influence on the temperature readings and on the results of the calculations. Discovering this measurement bias was my first turning point from a believer to a skeptical view.

The ultimate question is: how much does this non-random sampling matter? That is an open question. Maybe the unmeasured locations would cancel the bias out. But then again, maybe not; systematic bias is very unlikely to cancel out. If one wants a result from this incomplete data, it is necessary to make assumptions about the size of the bias.
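To give an idea of how much this could matter, here is a toy simulation (all numbers invented, no real station data) comparing a random sample of a synthetic "world" with a convenience sample that over-represents the warm, populated locations.

```python
# Toy simulation of how convenience sampling can bias an estimated mean.
# The "world" is synthetic: warmer, easy-to-reach cells where people live and
# cooler remote cells (mountains, deserts, oceans). All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

populated = rng.normal(12.0, 3.0, 2_000)  # °C, convenient locations
remote = rng.normal(2.0, 6.0, 8_000)      # °C, hard-to-reach locations
world = np.concatenate([populated, remote])

# Random sampling: every location has the same chance of being measured.
random_sample = rng.choice(world, size=500, replace=False)

# Convenience sampling: 90% of the stations end up in populated locations.
convenience_sample = np.concatenate([
    rng.choice(populated, size=450, replace=False),
    rng.choice(remote, size=50, replace=False),
])

print(f"true mean:               {world.mean():5.2f} °C")
print(f"random-sample mean:      {random_sample.mean():5.2f} °C")
print(f"convenience-sample mean: {convenience_sample.mean():5.2f} °C")
# The convenience sample lands systematically warmer than the true mean, and
# adding more stations in the same convenient places does not fix that.
```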

Look at how the GISS dataset morphed over a couple of decades from a cycle to almost a straight line, which gives the impression that the scary result depends on new assumptions, not new measurements.

That is only land temperature. The Earth is about 70% covered by water. Measuring sea temperatures evolved from sticking a thermometer into a bucket of water drawn from the ocean, via automatic measurement of the water temperature at the intake ports of large ships, to buoys. It went from very scarce data in the past to more detailed information from 2003 onward (Argo).

What about satellite data? Coverage is much better, although not 100% of the surface (there are slices that aren't covered and there are gaps at the poles). But these are not the datasets used by alarmists, and there are only about 30 years' worth of data.

But, but, isn't the GISS dataset a temperature anomaly, not absolute temperatures? Sure, it is, and that has its advantages and disadvantages. Maybe more on this in a later post. In GISS the result is the difference between the measured temperatures and the average temperature over 1951-1980, smack in a period when there was a new ice age scare. Compare a current temperature with a low-temperature baseline and the current temperature will be over-accentuated.

Ultimately, why did I take it for granted? Every time I heard about it, it was presented as something evident: "the temperature of the earth is rising". This made me think it was evident. Science had made quite some progress; why wouldn't it be possible to determine the temperature of the Earth? But the temperature of the Earth is incredibly complex and ever changing. Now, when someone tells me that the temperature of the Earth (510 million square kilometers) can be measured to an accuracy of 0.1 °C from biased samples containing the data of a couple of thousand stations, I think it is ridiculous, something not to be taken seriously.

Catastrophically clashing definitions

For many years I was on the comfortable side of the global warming debate. After changing position, I often contemplated why there is so much polarization between the two sides. Considering my own shifted position, I could find a reason: both sides have a different definition of what they mean by global warming or climate change.

As I said some posts ago, definitions are very important. They can make the difference between having less, equal or more woodland, depending on what definition people use and their counting method.

But it is just as bad with the terms that form the heart of the global warming/climate change debate. Let's first have a look at: Global Warming

At first glance it all seems pretty clear: the temperature of the Earth is rising globally. Sure, but temperatures are not rising everywhere on Earth. Some places warm, others cool, others stay the same. So "global" should be defined more clearly. Is it an area? In that case, how much area is enough to speak of global warming?
Or is it an average of temperatures around the world? In that case, what defines global warming: land, ocean, land + ocean or atmosphere? Which dataset should be used: GISS, HadCRUT, BEST, UAH, RSS,…? And what if one or more of them don't agree?

If that is settled, then what temperature increase is deemed catastrophic: 0.5, 0.8, 1, 2, 4 °C or more per century, or per doubling of the CO2 concentration? Or just anything above zero? Over what time frame? And compared against what? A cool period? A warm period? A static period? The last 30 years? One complete cycle?

There is a lot of stretch in the terminology. Two people talking about global warming can be talking about two completely different things. If they don't realize this, misunderstandings occur. Moreover, an ill-defined term can be stretched as one goes along. For example: if temperatures go up, one can talk about global warming. If one or more series are not going up, maybe another dataset still goes up and can still confirm global warming. If all series stay the same or go down, one can still define warming as having higher temperatures than previous years/decades, focusing only on the rising part of the cycle.

And then we haven't even talked about the predicted/projected consequences of global warming. Some say hurricanes are increasing in a warming world, others say there is not much confidence in that. The same with droughts: some attribute them to global warming, some don't.

An example of stretching a vague definition is the shift from "temperatures are the highest in the last x years" to "warming since 1950", which excludes the current pause, but also the inconvenient 1930s-1940s. This means it is still possible to talk about warming. Okay, not now, but in the past. But is that what the public understands when they hear it? They would think it is currently still warming, and therefore alarming.

Another example is the shift in focus to single events in a limited area, like the drought/forest fires in part of the contiguous United States (just a tiny area of the globe) or a storm like Sandy (ignoring the all-time low storm count/intensity).

With no clear definition everything, great or small, can be taken as evidence of global warming.

The same with the term: Climate Change

Here too it seems pretty clear at first glance: at least one element of the climate is changing.
But climate is the average of weather over a longer timespan. Which timespan does one take: 1, 5, 10, 15, 30, 60 years or even longer? Which elements are considered important: temperature, precipitation, snowfall, sea level, storms, ice area,…? And what if one takes a change of weather as proof of climate change? Then one can prove anything.

Change is the norm. If one takes variability in a chaotic system as proof of change, there is no limit to the proof one can accumulate!

In conclusion: with those ill-defined terms, all the bases are covered for those who want to sound the alarm. If temperatures are not cooperating, surely there will be some change somewhere, anywhere. Call it moving goalposts, non-falsifiability or whatever. But it also means that even in the face of contrary evidence, a view can be kept alive persistently.

That is not all. Imagine the confusion when people talk about global warming but actually mean catastrophic anthropogenic global warming. When scientists in the media state that it has warmed since the 1950s (which is correct), the public thinks this proves "we caused it" and that it is "catastrophic". Been there, done that. Adding to the "overwhelming" evidence that (catastrophic) global warming is happening and should be prevented.

How could alarmists and skeptics ever talk constructively when they have no common definitions of "global warming" and "climate change"? There is more agreement between them than is admitted, but their different definitions make them talk past each other.

Moreover, if there is no clear definition of "global warming" or "climate change", how can we know when this global warming or climate change becomes/became/is catastrophic?

When using such vague definitions, one can explain just about anything.

The two faces of consensus


When searching for information for the previous post I found an article on randi.com about risk, emotion and global warming. Reading it, I was catapulted back to my alarmist years. It gave an accurate insight into how I was thinking only five years or so ago. It starts like this:

I am not going to lie to you; I am freaked out about climate change. At least politicians today can say something to the effect of “it’s something that the next generation must face down,” seemingly abdicating their own responsibility. But I am a part of that next generation. Climate change is something that I am going to have to deal with, and I’m not sure if my generation and I can.

[…]

Moving forward I am going to assume two things. First, that global warming is happening and is human caused (as per the scientific consensus), and second, that most projections about the effects of climate change are grim. That is to say, whatever comes of climate change, it won’t be good. […]

I don’t want to throw stones at anyone. I realized that just about five years ago I was thinking exactly the same things. I can clearly feel his pain. Maybe freaking out would be a rather strong term, but at that time I found the changing climate (then mostly called “global warming”) worrying.

I also assumed that climate change was happening and that our future looked grim if nothing was done quickly. This was rooted in my then unshakable belief in the scientists and the models they used. I did think they exaggerated their findings, but that they were nevertheless correct.

At the base of it all was "the consensus". I had other priorities, and believing in a broad agreement among scientists was really reassuring. I didn't have to check anything, didn't need to think for myself. Just believe what I was being told.

To be honest, I have no real problem with a consensus as a concept. It is an agreement within a group of scientists, and I can accept that in a field that has been studied extensively such an agreement could exist. But I know there is no such thing as a "scientific" consensus. As Einstein reputedly said: "It would only take one to prove me wrong." Consensus is not part of the scientific method, nor does a consensus prove anything. To believe that the consensus is right, one also has to believe that there is conclusive evidence. It is a logical fallacy to claim that there is conclusive evidence because there is a consensus among "experts". There is not much value in arguing about whether there is a consensus, or whether the consensus proves something.

I do have a problem with a consensus specifically in climate science, and this for several reasons.

Climate science is a rather young science, and detailed information has only been gathered over the last three decades. Climate is weather over a longer timespan; 30 years is only about half a cycle. Before that there was only sparse data, not intended as a tool for measuring global temperatures, and therefore prone to interpretation. Just look at the ever-changing GISS dataset.

Moreover, how in the world can there be a consensus in a science consisting of multiple disciplines studying a complex, chaotic system with only sparse historical data available? The uncertainty should be high, and higher uncertainty (for example about the role of CO2) is more likely to decrease agreement.

There is also another dimension: consensus can be used to stifle debate, to shut out opponents with a different vision or challenging viewpoints. It is an often-heard message: "the debate is over". If the consensus is used to avoid talking about the evidence, it is again just a logical fallacy.

Last, but not least, what is the value of a consensus in a group of scientists that was selected by politicians with a special goal?

Apparently my view on the issue has changed quite a lot over the last few years and I found myself on the other side of the debate. That doesn't put me in the most comfortable position, but I think it was the right thing to do. Just taking things for granted undoubtedly put me in a comfortable position, but that can't compare with the insights gained from looking at both sides of the issue.

Creating attribution out of nothing


A couple of days ago I heard the news that Charles Monnett (an Arctic wildlife biologist) was forced to retire as part of a settlement with a federal agency. Monnett, together with Jeffrey Gleason, became known for spotting four dead polar bears after a storm in 2004 while studying bowhead whales, and for publishing a paper on these observations in the journal Polar Biology. The media presented this as proof of the negative influence of global warming on polar bears.

I remember it well. I was still a believer at the time and I found this “evidence” worrying. Believing in the validity of the results of this paper, I took this as proof that polar bears were not doing well because of our emissions. Adding to the overwhelming evidence of the wrongdoing by us, humans. I heard it and believed it was true, maybe somewhat exaggerated, but true nevertheless.

In 2011 Monnett was investigated by the Interior Department's Inspector General's office, which has now culminated in his retirement. Reading about this, I came across several comments saying that the paper itself was not alarmist in nature. In 2011 I wasn't really interested in looking into it. Now seems a good moment to do so: to see what it was based on and how the media broadcast it, and whether my trust in this result nine years ago was justified.

The paper was quickly found. Indeed, it was not really an alarmist paper. What it basically said was that, although polar bears are good swimmers, the authors saw more swimming bears (almost 20%) than was observed in the past (almost 4%). Less ice means an increase in wave height, and with more bears having to swim over larger distances this could mean more drownings, especially for mothers with cubs. After a storm they found dead polar bears during their survey, something they claim wasn't observed in the past (this could have other explanations though: considering that they made the observation by chance, sampling bias comes to mind).

An interesting part of the paper was the “creative” calculation of the survival rate of the (swimming) polar bears. This is how it was done:

Only a small total number of bears was seen on > 14,000 km of transect surveyed in 2004, thus limiting our ability to provide accurate estimates of polar bear mortality and associated confidence intervals (see McDonald et al. 1999; Evans et al. 2003). If, however, data are simply spatially extrapolated, bear deaths during a period of high winds in 2004 may have been significant. Our observations obtained from 34 north-south transects provide coverage of approximately 11% of the 630 km wide study area assuming a maximum sighting distance for swimming/floating polar bears of 1 km from the aircraft (coverage = (34 transects x 2 km wide transect) / 630 km = 10.8% of study area). Limiting data to bears on transect and not considering bears seen on connect and search segments, four swimming polar bears were encountered in addition to three dead bears. If these bears accurately reflect 11% of bears present under these conditions, then 36 bears may have been swimming in open water on 6 and 7 September, and 27 bears may have died as a result of the high offshore winds. These extrapolations suggest that survival rate of bears swimming in open water during this period was low (9/36 = 25%).

Basically they said:

  1. 11% of the area was surveyed
  2. 4 bears were seen swimming in this area between 6 – 8 September 2004. Meaning 4 x 100/11 ≈ 36 bears swimming
  3. 3 dead ones were seen from 14 September 2004 (there was also another dead bear in another area, so in total 4 observed, but 3 in the surveyed area). Assuming those 3 were from the 4 swimming bears, meaning 3 x 100/11 ≈ 27 bears dead
  4. Result: 36 swimming bears – 27 dead ones = 9 survived the storm.
  5. 9/36×100 = 25% survival rate of the swimming polar bears.

All those calculations were not even needed: under their assumption that the covered transects are representative of all the others, 1 bear out of 4 survived, meaning 25%. But 3 dead bears is not as impressive as the virtual 27 deaths from the calculation.
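For the record, here is that arithmetic as a small calculation. The counts and the 11% coverage are the figures quoted above; it simply shows that the extrapolation cancels out and the 25% is nothing more than 1 survivor out of 4.

```python
# The extrapolation from the paper as a small calculation. The counts are the
# ones quoted above; the 11% coverage is the paper's own figure.
coverage = 0.11      # fraction of the study area surveyed
swimming_seen = 4    # swimming bears observed on transect (6-7 September)
dead_seen = 3        # dead bears observed on transect after the storm

swimming_est = swimming_seen / coverage            # ~36 bears for the whole area
dead_est = dead_seen / coverage                    # ~27 bears for the whole area
survival_extrapolated = (swimming_est - dead_est) / swimming_est

# The scaling factor cancels out, so the ratio is just the raw counts:
survival_raw = (swimming_seen - dead_seen) / swimming_seen

print(f"extrapolated survival rate: {survival_extrapolated:.0%}")  # 25%
print(f"raw survival rate:          {survival_raw:.0%}")           # 25%
```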

This was based on a very, very small number of observations (ahem, n = 4) and on a load of assumptions. Moreover, the pictures taken as evidence seemed to contain just some blurry white spots in a blue ocean. Hey, most Bigfoot footage is more detailed than these photographs. The other pictures (of bowhead whales) apparently were much clearer. Mmmmmh.

A lot of questions arise. Since 2004 the Arctic ice pack has diminished even further and reached its lowest value in 2012. If polar bears are drowning in droves, where are the pictures of floating dead polar bears? If there are more swimming polar bears and a survival rate of 25%, where is the evidence of dramatic population declines? If less ice means more swimming and swimming means more drowning, and these were the first drowned bears to be observed, why were no drowned bears observed in 1995/1997, when there were observations of bears much further from the ice? Even if these were really dead polar bears, how can such a small set (4 observations in a small area) say anything about a global population of 20,000-25,000? Wasn't this a special event in which it is "likely the creatures drowned in a sudden windstorm that produced 30-knot winds, not for lack of an ice pack", as Gleason acknowledged in the Inspector General interview? Assuming of course that those white dots in the ocean were actual dead polar bears and not, for example, debris from the storm.

Not really the strong evidence I was expecting to find.

Monnett and Gleason said they didn't mention global warming in the paper, which is correct. They stated that "to date, mortality due to swimming has not been identified as an associated risk" (of course not, they were the first to observe it), but that it "may become important in the future if the Arctic pack ice continues to regress". However, alarmist minds should have no difficulty filling in the global warming storyline here.

And fill it in they did. Al Gore talked about it in An Inconvenient Truth:

That’s not good for creatures like polar bears that depend on the ice. A new scientific study shows that for the first time they’re finding polar bears that have actually drowned, swimming long distances up to 60 miles to find the ice. They did not find that before.

Complete with a heartbreaking animation of a polar bear desperately trying to climb onto an ice floe that breaks with every try. So not only did the assumption disappear, but the anecdotal evidence was declared absolute truth and seamlessly connected with, gasp, global warming. No word about the storm either. It makes it appear as if the polar bears drown because they just cannot find any ice anymore. This desperate bear in a vast ocean with nothing to hold on to is of course a powerful image and will provoke an emotional response in many. But looked at in the light of its information value, the portrayed image is not true.

The mainstream media did basically the same. Although Monnett and Gleason didn't attribute the shrinking ice pack directly to global warming, the media had no problem at all filling in the gap. Most also didn't mention the storm and attributed the drowned bears directly to global warming.

It didn’t stop there. The paper was cited by the U.S. Fish and Wildlife Service in its 2008 decision to list the polar bear as threatened under the Endangered Species Act.

Although the paper offered only anecdotal evidence, included creative "statistics" and was based on a load of assumptions, it apparently had quite some influence. Nine years ago I trusted that this poor "survival rate" of (swimming) polar bears was rooted in serious science. That trust was not justified. In the end, it was not the paper that made the connection between drowned bears and global warming; it was Al Gore and the media that made that connection.

Electrawinds doesn’t pull the plug (yet)

Just an update on the problems at Electrawinds. Yesterday evening, after a board meeting, Electrawinds decided not to file for bankruptcy at the moment. They will first sell one of their projects to pay off some of their debt. There are also parties interested in buying some parts of the company. With a current debt of €362 million, that is probably just postponing the inevitable.

The reactions in the Belgian media were interesting. Criticism from economists and politicians was harsh. Words like "megalomania", "not transparent", "conflict of interest" and "mismanagement" are flying around.

This is how Ivan Van de Cloot (chief economist of the think tank Itinera) was quoted in a VRT news article (translated from Dutch):

According to [Ivan Van de Cloot] the core of the problem is that there was a build-up of too much debt, more than 362 million, but at the same time the structure of the company was too complex and lacking transparency. The management wanted to keep absolute control in their hands and therefore accepted no or few co-investors, but rather creditors. That comes to haunt them, especially for a company that lives primarily from subsidies.

In addition, a number of figures in the company granted themselves privileges, allowing them to exert claims on Electrawinds' assets. Even if the company goes bankrupt, they would be first in line to claim those assets.

Or, as Jean-Marie Dedecker (founder of the LDD party) said (translated from Dutch):

A company that depends on government subsidies for three quarters of its revenue is doomed to go bankrupt.

Politics seems to have played a crucial part in the process. The names of Minister Vande Lanotte and the Flemish socialist party (SP.A) come up frequently in this regard.

On the political side, only two (tiny) parties are heard on this issue. One is the Flemish green party, probably because they want to distance themselves from the debacle and limit the damage. The other is LDD (Lijst Dedecker). Jean-Marie Dedecker has been tirelessly shouting to the world for several years that there was something awfully wrong with the green subsidies, especially those for the offshore wind farms. Until now nobody took notice, but now he is having a field day.

Damage has been done. Not only for the (socialist) politicians, who will have a difficult time in the next six months, but also in the loss of public support when people start to realize that their government basically just handed Electrawinds a cheque without asking for guarantees, and without getting much control in the process.

Is this really new? No, not really. Dedecker was not the only one saying something was really wrong with Electrawinds and the numerous subsidies it received. Last year two investigative journalists, Wim Van den Eynde and Luc Pauwels, wrote the book "Keizer van Oostende" (Emperor of Ostend, their nickname for Vande Lanotte), in which they described the connections between politics and, for example, Electrawinds via the Minister. This has been known for more than a year now.

With next year being an election year, this could make things very interesting.