Monthly Archives: December 2013

Global warming will intensify drought … Possibly. Maybe. Perhaps.


The last IPCC report assigned only low confidence to an increase in the frequency of droughts. In the skeptic blogosphere this quickly became known and was widely shared. The mainstream media didn’t seem to pick it up; it was obviously not in the press release. A couple of days ago I came across an article that seemed to defy this statement: Global warming will intensify drought, says new study by John Abraham.

I was rather curious why droughts would be increasing. Had some new evidence popped up since AR5? Was there something that other studies had missed? What proof did this particular study find that contradicted the IPCC statements in AR5? Did the investigators have new insights? Looking at the name of the author, I feared this could be a one-sided article. My fear became reality: already in the second sentence, human emissions were plainly blamed for heat waves and changing rain patterns. This is how the article starts (my emphasis):

When scientists think about climate change, we often focus on long term trends and multi-year averages of various climate measures such as temperature, ocean heat, sea level, ocean acidity, and ice loss. But, what matters most in our day-to-day lives is extreme weather. If human-caused climate change leads to more extreme weather, it would make taking action more prudent.

It is clear that human emissions have led to increased frequencies of heat waves and have changed the patterns of rainfall around the world. The general view is that areas which are currently wet will become wetter; areas that are currently dry will become drier. Additionally, rainfall will occur in heavy doses. So, when you look at the Earth in total, the canceling effects of wetter and drier hides the reality of regional changes that really matter in our lives and our economies.

Something that caught my eye was the shift in focus from global climate to local weather. For a long time we heard about GLOBAL warming. Now the author has an explanation ready if there are no global changes: just say that local weather counts more. Speaking of moving goalposts. If local weather extremes can be attributed to global warming, there is no limit to what one can prove.

The study he hinted at in the title was by Trenberth et al.: Global warming and changes in drought. It was behind a paywall, so not freely available to look at.

It seems to discuss the different ways that droughts are measured. In the paper, Trenberth documented five different teams coming to five different conclusions.

One reason for this was the different base periods (1950-2008 versus 1950-1979) used by the investigators. I can imagine this; it is logical. Different base periods can give different results, as I have seen before.
A second reason was the limited availability of the data.

Basically, the conclusion was that climate change impacts our lives, so it is important to have more data.

Hey…wait…with all the different results of those five different teams and the uncertainties from a lack of data, how could Abraham ever come to the conclusion that, ahem, “Global Warming will intensify drought”?!?! That doesn’t fit.

Confused by this article, I searched for more information about the paper and found another article: Still Uncertain: Climate Change’s Role in Drought by Bobby Magill (Climate Central, also not exactly a skeptic site). This article, with quotes from the author, gives a whole new perspective on the story compared to what we saw in the Abraham article:

It’s common for direct connections to be drawn between climate change and the effects of the devastating droughts that have been afflicting the U.S. and other parts of the world over the last decade. A new analysis led by scientists from the National Center for Atmospheric Research says there are still many uncertainties about how climate change is affecting drought globally, though.

The analysis, authored primarily by NCAR senior scientist Kevin Trenberth, concludes that more global precipitation data need to be made available and natural variability needs to be better accounted for to fully determine how climate change is affecting drought worldwide.

“We are really addressing the question of, how is drought changing with global warming and expected to change in the future?” Trenberth said Friday. “To address that question, how is drought changing with global warming, you have to address the question, is drought changing?”

[…]

Trenberth’s paper concludes that changes to the global water cycle in response to global warming will not be uniform. The analysis noted that the differences in precipitation between typically wet and dry regions and seasons will likely increase, but climate change is unlikely to directly cause droughts in the near future.

That is something completely different. The article seems to be about acknowledging the uncertainty of global warming’s effect on droughts. And yes, Trenberth assumes the difference between wet and dry regions will likely increase. But this is his initial assumption, not yet confirmed by the data (because not enough data is available yet to do so).

To be clear, this is not his conclusion, as Abraham seems to suggest, but the assumption he starts from!

Big difference.

Things I took for granted: when the blades of a windmill turn, it is saving fossil fuel somewhere else

In a strange way I do like windmills. They look majestic, and the slow turning of the blades has something meditative about it. There is also that idea of “when the blades are turning, the windmill is producing energy and therefore saving fossil fuels somewhere else”. It is a nice thought, but is it also true? I didn’t give it much thought until I experienced something that made me think.

Just before the millennium I bought a house and renovated it. As someone green at heart I wanted some green technology in it. I thought a solar panel would be nice. When asking around I heard that solar panels that produce electricity were not yet efficient. I was advised to take a solar panel that heats water in a boiler. The principle is really straightforward: the sun heats the water that goes into the boiler. If I need warm water and the water isn’t warm enough, the central heating system heats it up instead. If the water is warm enough, I actually have water heated by the sun. Simple as that.

It looked promising. Even when there wasn’t that much sunlight, the indicator light on the solar boiler lit up. My water was heated by the sun, saving the gas that would otherwise have been needed to heat that same water. Nice, it worked!

But there were some problems too. The central heating system didn’t work that well. It took a looong time before the room was warm. It was a new system, so I feared that the capacity of the central heating had not been calculated correctly and that my system was underpowered. Not really threatening, but quite an inconvenience. Probably my own mistake, I thought. What goes around comes around.

Other things made me think as well. One summer I switched off my central heating. My reasoning was that in summer I only needed warm water, which shouldn’t be a problem when the sun was shining; the solar boiler should be enough. That didn’t go well. Although the indicator light was on most of the time (so water was being heated), the water that came out of the system was only lukewarm.

Fast forward somewhat later. I heard strange noises in the solar installation and unplugged it. This had quite some consequences. When I turned on the heating, the room heated up in a jiffy … the central heating system was not underpowered after all. It worked just fine. The problem seemed to be the solar installation, or the link between the solar installation and the central heating system.

But if the central heating had trouble heating the room, it was running longer and thus using more gas. Was this really true? To test this I left the solar installation off for a longer period. Then came winter. The central heating wasn’t struggling; indeed, less gas was needed to reach the desired room temperature, and the room warmed up much faster than before.

I don’t know why the installation was faulty. Maybe there was a manufacturing fault. Or it wasn’t properly installed. Or it didn’t work well with my central heating system. Or I couldn’t expect more from this early generation of solar installations. Or my warm-water consumption pattern wasn’t compatible with the system. Or whatever.

The point I want to make is this: when the alternative energy source is only a tiny portion of the total energy produced, it is really difficult to know whether you are saving energy or not. I didn’t notice that my installation consumed more energy than it delivered. I thought I was saving gas because the indicator light was on, but that was clearly not the case.

The same goes for generating electricity with wind or solar. Can we be sure that when the blades of a windmill are turning, energy is saved somewhere else? Wind and solar energy are intermittent and are used in a system that needs constant power production. This means the production of electricity depends on the wind or the sun, not on our consumption. We cannot trust wind and/or solar to produce energy when it is needed, so backup power needs to be provided, which uses (fossil) fuel. But with only a few percent of solar and wind in the energy mix, nobody will ever notice whether we are saving fossil fuels, breaking even, or using more of them in the process.
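
To see why this is so hard to notice, here is a minimal toy model, entirely my own invention and nothing like a real grid study: a constant demand is served by wind plus a gas backup plant, and the backup is assumed to run less efficiently at part load because it has to follow the wind. Every number in it (demand, efficiencies, wind statistics) is an assumption for illustration only.

    # Toy model: constant demand, intermittent wind, gas backup whose
    # efficiency drops at part load. All numbers are invented assumptions.
    import random

    random.seed(42)

    DEMAND_MW = 100.0          # constant demand (assumption)
    HOURS = 8760               # one year of hourly steps
    FULL_LOAD_EFF = 0.55       # gas plant efficiency at full load (assumption)
    PART_LOAD_PENALTY = 0.15   # efficiency lost when throttled down (assumption)

    def backup_efficiency(load_fraction):
        # Assumed curve: full efficiency at full load, linearly worse
        # as the plant is throttled down to balance the wind.
        return FULL_LOAD_EFF - PART_LOAD_PENALTY * (1.0 - load_fraction)

    def fuel_used(wind_capacity_mw):
        # Fuel (MWh thermal) burned by the backup plant over a year.
        fuel = 0.0
        for _ in range(HOURS):
            # Gusty wind: mostly low output, occasionally high.
            wind = wind_capacity_mw * random.betavariate(1.5, 4.0)
            backup_load = max(DEMAND_MW - wind, 0.0)
            if backup_load > 0:
                fuel += backup_load / backup_efficiency(backup_load / DEMAND_MW)
        return fuel

    no_wind = fuel_used(0.0)
    with_wind = fuel_used(30.0)  # a modest wind fleet (assumption)
    print(f"fuel without wind: {no_wind:,.0f} MWh thermal")
    print(f"fuel with wind:    {with_wind:,.0f} MWh thermal")
    print(f"fuel saving:       {100 * (no_wind - with_wind) / no_wind:.1f}%")

With these invented numbers, wind supplying roughly 8% of the electricity saves only about 6% of the fuel, and a harsher part-load penalty shrinks the saving further. The point is not the exact figures but that the saving is an output of assumptions, not something you can read off the turning blades.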

Things I took for granted: tree rings are accurate temperature proxies


The biggest hurdle I took in my quest to understand the global warming story was a graph called the Hockey Stick. It represents the temperatures over the last 1,000 years. I believed what I was seeing and took it as proof of current anthropogenic global warming. I found it everywhere and it was made by scientists, so naively I thought it had to be correct. If temperatures stayed stable for about 1,000 years and then took off like a rocket in the last hundred, how much more proof does one need, considering that carbon dioxide is a greenhouse gas and we emit loads of it into the atmosphere?

Tree rings showed how warm or cold the climate was; I had no real problem with that, I had learned it in primary school. The higher the temperature, the wider the tree ring; the lower the temperature, the narrower the tree ring. Just count and measure them and you are done. This was confirmed by scientists who declared that bristlecone pines were good proxies for past temperatures because they live long.

There was the word: proxies. A thousand years ago there were no thermometers. Temperatures back then were inferred not from instruments that measure temperature, but from something that is influenced by temperature. That is a proxy. Temperature has an influence on the width of tree rings. True, but does that make them good indicators of past temperatures?

Looking at the background: trees are complex organisms. They react to temperature, sure, but also to a bunch of other things. Besides temperature they react to:

  • precipitation
  • nutrients
  • disease
  • wind
  • sunlight
  • competition with other trees
  • competition with animals
  • local variations
  • concentration of carbon dioxide in the air
  • events like storms, lightning,…
  • and probably many, many more…

That makes them different from thermometers. A thermometer measures temperature via the expansion/contraction of a substance, which is representative of the current temperature. There is a direct relation between the expansion/contraction and the temperature; hence the ability to measure temperature.
Temperature has an influence on the width of tree rings too, but this influence is not direct. The tree rings record the temperature signal, but also all those other signals. The temperature signal is diluted among the others, so there will be a lot of noise in the tree rings. Getting rid of that noise and distilling only the temperature signal is not possible if nothing is known about the other signals.
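
To make the dilution concrete, here is a minimal sketch, entirely invented by me and not a real dendrochronology method: ring widths are simulated as responding to temperature and to precipitation plus random noise, and then we check how well the width alone tracks temperature. All coefficients are assumptions for the example.

    # Simulated tree: ring width responds to temperature AND precipitation,
    # plus noise. All coefficients below are invented for illustration.
    import random
    import statistics

    random.seed(1)
    YEARS = 200

    temperature = [random.gauss(10.0, 1.0) for _ in range(YEARS)]       # °C
    precipitation = [random.gauss(800.0, 150.0) for _ in range(YEARS)]  # mm/yr

    # Assumed growth model: width mixes both signals with random noise.
    width = [0.4 * t + 0.004 * p + random.gauss(0.0, 0.5)
             for t, p in zip(temperature, precipitation)]

    def correlation(xs, ys):
        # Pearson correlation, written out for clarity.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"correlation of ring width with temperature: "
          f"{correlation(width, temperature):.2f}")

With these made-up coefficients the correlation comes out well below 1: the width is a noisy, diluted record of the temperature. And in a real reconstruction the other influences are unknown, rather than conveniently simulated.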

There are other issues: datasets from trees are sparse. There aren’t that many very old trees. And how to compare them with modern datasets from real thermometers? Thermometers are read at least twice a day, sometimes every hour; tree rings give one value per year. Tree rings also depend on the individual tree and the period in which it lived.
A lot of fuss about sparse data; where did I hear that before?

Back to my own story: the Hockey Stick was a difficult hurdle to take. It was difficult because I was beginning to realize that the media brought one-sided information about the climate, but I wasn’t yet far enough to realize that it was necessary to be critical myself, not just to assume that something is true because the majority thinks so.

Why did I take it for granted? It was a combination of several things:

  • I just looked at the graph and what it meant seemed obvious. I even saw it as proof that humans were causing global warming.
  • I trusted science. I had no reason to believe the scientists were biased in any way or that the information that reached me was one-sided.
  • The graph was found everywhere I searched for historical temperature data. I had no reason to think this could be one-sided information; it seemed straightforward.
  • It was presented as something evident: there was no doubt about this. For example, a scientist stating that “tree rings are a good proxy” because the trees live long. I didn’t realize that he probably meant a proxy far back in time, not necessarily a reliable proxy for temperature. Big difference.
  • The basics look simple and straightforward: warmer means bigger rings, colder means smaller rings. It didn’t seem rocket science. What I forgot to take into account was that a tree is a living thing, reacting to the many influences in its environment. It was presented too simply, and a little bit of thinking would have uncovered the flaws in the reasoning.

When I think back on this period, I ask myself the big question: how could I ever have believed that tree rings are thermometers in disguise? How could I ever have believed this stuff?

Not the only one

The last two posts were about the global average temperature of the Earth. Reading them again, I realized that I forgot to mention an important thing: not only did I assume that there was some kind of global average temperature, I also assumed that there was only one dataset “measuring” it, accurately, in a way we could trust. In my believer years this was the NASA GISS dataset (largely based on the NOAA NCDC dataset).

There is obviously not only one dataset; there are at least five, probably even more. There are surface-based datasets like NASA GISS and HadCRUT, but also satellite-based datasets like RSS and UAH. The results of these sets are not all in the same league; there are differences between them. Besides the different measuring methods, there is a difference in base period. The datasets don’t give real temperatures, but anomalies (departures from a base period). The base period for NASA GISS is 1951-1980 (smack in the middle of a cold period), RSS uses 1979-1998, UAH has used 1981-2010 since 2010, and NOAA NCDC uses 1971-2000.

Let’s go back to the last post, about the warmest November ever. In the media, unsurprisingly, only the NOAA NCDC dataset was used: November 2013 was 0.78 °C warmer than the average over all years starting from 1880. NASA GISS had something similar with an anomaly of +0.77 °C, HadCRUT had +0.59 °C, UAH had +0.19 °C (only the 9th warmest since 1979) and RSS had +0.13 °C (only the 16th warmest since its start).

Now comes the fun part. Alarmists say that the base period doesn’t really matter; it is the trend that counts. But when one compares anomalies with each other, it does matter. For example, if you take the NASA GISS dataset and compute the November 2013 anomaly against different base periods, you get different values (in °C, for two smoothing radii):

Base period                   250 km   1200 km
1951-1980 (NASA GISS)          0.73     0.77
1971-2000 (NOAA NCDC)          0.56     0.60
1979-1998 (RSS)                0.51     0.55
1981-2010 (UAH)                0.38     0.44
1880-2013 (complete period)    0.70     0.76

This is exactly the same dataset. The only difference is the base period, and we already get a range from 0.38 °C to 0.77 °C. That spread is already half of the supposed warming. Measuring and calculating the global average temperature doesn’t seem to be the exact science the media want us to believe it is.
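
The arithmetic behind this is trivial, which makes the effect easy to demonstrate. Below is a minimal sketch with an invented temperature series (not actual GISS data): an anomaly is simply the measured value minus the mean of the chosen base period, so a colder baseline makes the very same temperature look more alarming.

    # Anomaly = value minus the mean of a chosen base period.
    # The series below is invented for illustration, not real GISS data.

    def anomaly(series, value, base_start, base_end):
        # Anomaly of `value` relative to the mean over the base period.
        base = [t for (year, t) in series if base_start <= year <= base_end]
        return value - sum(base) / len(base)

    # Hypothetical November temperatures (°C), 1880-2013, gently rising.
    series = [(year, 13.0 + 0.005 * (year - 1880)) for year in range(1880, 2014)]
    november_2013 = 13.9  # invented value for the example

    for (start, end) in [(1951, 1980), (1971, 2000), (1979, 1998),
                         (1981, 2010), (1880, 2013)]:
        print(f"base {start}-{end}: anomaly = "
              f"{anomaly(series, november_2013, start, end):+.2f} °C")

One invented temperature, five different anomalies, simply because the reference changes. That is the whole trick behind the spread in the table above.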

So they found the highest number and threw it to the public as if it were the only dataset that matters. Without mentioning the other datasets. Without mentioning the high uncertainty of the measurements before 1979, or even before 2003. No balance at all here. To be honest, I was not even surprised to find that the media broadcast only the highest number of them all. I don’t think it is a coincidence that they took just this dataset and sounded the alarm. Keeping the scare alive.

The warmest November everrrrr


In the previous post I explored my misconception of a long-term average global temperature derived from sparse weather stations. Imagine my surprise when I read the newspaper the next day and came across a perfect example of what I had been explaining: this year had the warmest November ever. The article seemed to be taken from the VTM news of December 17, 2013 (see the screenshot on the right for how it was presented). This was the quote from the news (translated from Dutch, my emphasis):

Last November was the warmest in 134 years worldwide and this basically means it was the warmest November ever measured. Normally the average temperature worldwide in November is 12.9 °C. This year it was 0.78 °C warmer.

The numbers are from NOAA via GHCN-M (monthly mean land temperature) combined with ERSST.v3b (Extended Reconstructed Sea Surface Temperature).

Look closely: it is presented as if this 0.78 °C were accurately measured somehow. This is obviously not the case. It is a statistical analysis under the assumption that these land + ocean measurements represent the real temperature of the Earth. For the public it is tempting to think this is the case, or that the calculations solve all the sampling problems. At least I did.

But the measurements are mostly taken in places where people are happy to live. That is called “convenience sampling” and it brings bias into the measurements. If these biased measurements are used in the calculation, the result will be a biased global average temperature.

Garbage In, Garbage Out

Earth’s temperature is very complex. There is no place on earth that keeps the same temperature for long. Even regionally, temperatures can differ quite a lot. How could one ever calculate the correct average temperature of the earth (510 million square kilometers!) with, oh dear, a few thousand weather stations/buoys/drifters, and do this with an accuracy of …gasp… 0.01 °C?!

Moreover, what is this compared with? Measurements before the 1980s were very sparse and hardly existed before the 1940s. Think of sea temperatures (three quarters of the earth) taken from buckets hauled onto whatever ships happened to be there. I find it hard to believe that calculations with such sparse data give the same incredibly high accuracy. Can it even be calculated reliably at all?

Things I took for granted: Global Mean Temperature


For the alarmist mind, climate could not be more simple. Carbon dioxide levels go up, temperatures go up. Whatever weather event we encounter is caused or influenced by it. Nothing can ever disprove this; there is no room for doubt with this simple logic.

This logic is based on several misconceptions. In the next few posts I will explore some of the misconceptions I had and how they changed.

The first misconception (addressed in this post) is: the earth has a global temperature, it is being measured, and it is going up in a way that should cause alarm. The measurement even seemed accurate enough to capture a 0.8 °C increase in temperature over 160 years.

Just a couple of years ago I had no doubt that this was feasible and that the science was mature enough to achieve this kind of accuracy. In my believer years I looked especially at the NASA GISS dataset. Not really a surprise: this dataset is extensively used by alarmist minds and it had an aura of trustworthiness. Let’s look into it.

Strange things start to happen when a person starts to think logically about the things that surround him. I came to the realization that in reality a single global temperature does not exist, and that it seems absurd to claim we could measure it accurately.

To begin with, temperature varies a lot, not only by location but also over time. In humans, taking a temperature is really simple: stick a thermometer in your mouth, read the value, and you have an accurate measurement of the temperature inside the body.

Not so for the Earth. There is no single convenient place where the temperature of the earth can be measured. For example, in Belgium the south-east (the Ardennes) has the highest elevation and in general colder temperatures than the rest of the country. In the north-west there is the North Sea, and temperatures there are moderate. In the north-east there are more extreme highs and lows. So even in a country as tiny as Belgium there are several different influences on temperature.

Even on a more local scale there are differences. I live near a hill, smack in the middle of the country. On that hill there is woodland, which has a slightly different temperature than its surroundings. A few kilometers from where I live there is another hill with a microclimate warm enough to cultivate grapes, something that is not possible where I live, even though it is within walking distance.

There is not only huge variation by location; each point also varies throughout the day and night. It is coldest in the morning just before sunrise and warmest in the afternoon. There is also variation throughout the year (coldest in winter, warmest in summer, with spring/autumn in between). And probably longer cycles of 30, 60, 200 years as well,…

So no place on earth keeps the same temperature for very long during the day, and temperatures change constantly. Measuring the mean temperature is quite a challenge. It is not possible to measure the temperature at every place, so the next best thing is to measure as many points as possible, as is done in surface temperature datasets like GISS and HadCRUT.

If all those stations were kept in the same way, this would give us some idea of the temperature evolution over time (at least for the measured spots), but this is not the case. Stations are dropped or moved, instruments are changed, surroundings change,… Inevitably, the mean temperature will be the result of a statistical analysis, hopefully a good representation of the real temperature.

When one wants meaningful results, samples must be representative of the population. Bias in sampling will influence the end result. The problem here is that surface stations are situated in specific places: in or near cities, airports and other places where people are likely to live, and excluding places where people normally don’t live (mountains, deserts,…). In the GISS dataset, most samples are taken in the United States, some in Europe and Asia, and only very few in Africa and Australia.

This is called convenience sampling. It means there is no real random sampling: not all points have the same chance of being measured. Although convenience sampling has its merits, it is definitely not the right way to sample for a mean temperature, especially when instruments/locations/… change over time.

Sampling in convenient places means sampling in or near cities and airports, and therefore picking up the urban heat island effect. Due to pavement, asphalt and buildings, more heat is accumulated during the day and radiated at night, leading to higher temperatures than there would be without these constructions. This could be compensated for, but that means starting from assumptions. The more the assumptions agree with reality, the more accurate the result. But how to correctly compensate for all this bias?

This is not the only bias. I have already learned about other siting biases, like weather stations located next to air-conditioner units, close to buildings and parking lots, even one on the roof of a building. These things will undoubtedly have an influence on the temperature readings and on the results of the calculations. Discovering this measurement bias was my first turning point from a believer to a skeptic.

The ultimate question is: how much does this non-random sampling matter? That is an open question. Maybe the unmeasured places would cancel the bias out. But then, maybe not; systematic bias is very unlikely to cancel out. If one wants a result from this incomplete data, it will be necessary to make assumptions about the size of the bias.
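
One way to get a feel for how much it can matter is a toy simulation, entirely my own invention and nothing like the real GISS processing: a world of warm lowlands and cold highlands, sampled once in proportion to area and once the convenient way, near where people live. All numbers are made up for the illustration.

    # Toy world: 70% warm "lowland" cells, 30% cold "highland" cells.
    # True area-weighted mean = 0.7 * 15 + 0.3 * 5 = 12 °C.
    import random

    random.seed(7)

    def lowland():  return random.gauss(15.0, 3.0)   # °C, assumption
    def highland(): return random.gauss(5.0, 3.0)    # °C, assumption

    def sample_mean(n_stations, lowland_share):
        # Mean of n stations, a `lowland_share` fraction placed in lowlands.
        readings = [lowland() if random.random() < lowland_share else highland()
                    for _ in range(n_stations)]
        return sum(readings) / n_stations

    # Representative sampling: stations in proportion to area (70% lowland).
    print(f"representative (70% lowland): {sample_mean(5000, 0.70):.2f} °C")
    # Convenience sampling: 95% of stations where people live.
    print(f"convenient (95% lowland):     {sample_mean(5000, 0.95):.2f} °C")

In this made-up world the convenient network reads a couple of degrees too warm, and correcting for that requires knowing the size of the bias, which is exactly the assumption problem described above.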

Look at how the GISS dataset morphed over a couple of decades from a cycle into almost a straight line. This gives the impression that the scary result depends on new assumptions, not on new measurements.

And that is only the land temperature. The Earth is 75% covered by water. Sea temperature measurements evolved from sticking a thermometer into a bucket of water drawn from the ocean, via automatic systems measuring the water temperature at the intake ports of large ships, to buoys. It went from very scarce data in the past to more detailed information from 2003 onward (Argo).

What about satellite data? Coverage is much better, although not 100% of the surface (there are slices that aren’t covered and there are gaps at the poles). But these are not the datasets used by alarmists, and there are only 30 years’ worth of data.

But, but, isn’t the GISS dataset a temperature anomaly, not absolute temperatures? Sure it is, and that has its advantages and disadvantages. Maybe more on this in a later post. In GISS the result is the difference between the measured temperatures and the average temperature over 1951-1980, smack in a period when there was a new ice age scare. Compare a current temperature with a low-average baseline and the current temperature will be over-accentuated.

Ultimately, why did I take it for granted? Every time I heard about it, it was presented as something self-evident: “the temperature of the earth is rising”. This made me think it was evident. Science had made quite some progress; why wouldn’t it be possible to determine the temperature of the earth? But the temperature of the earth is incredibly complex and ever-changing. Now, when someone tells me that the temperature of the earth (510 million square kilometers) can be measured to an accuracy of 0.1 °C from biased samples from a couple of thousand stations, I think it is ridiculous, something not to be taken seriously.

Catastrophically clashing definitions

For many years I was on the comfortable side of the global warming debate. After changing position, I often contemplated why there is so much polarization between the two sides. Considering my own shifted position, I could find a reason: both sides have different definitions of what they mean by global warming or climate change.

As I said some posts ago, definitions are very important. They can make the difference between having less, equal or more woodland, depending on what definition people use for it and on their counting method.

It is just as bad with the terms that form the heart of the global warming/climate change debate. Let’s first have a look at: Global Warming

At first glance it all seems pretty clear: the temperature of the earth is rising globally. Sure, but temperatures are not rising everywhere on earth. Some places warm, others cool, others stay the same. So “global” should be defined more clearly. Is it an area? In that case, how much area is enough to speak of global warming?
Or is it an average of the temperatures around the world? In that case, what defines global warming: land, ocean, land + ocean or atmosphere? Which dataset to use: GISS, HadCRUT, BEST, UAH, RSS,…? And what if one or more of them don’t agree?

And if that is settled, what temperature increase is deemed catastrophic: 0.5, 0.8, 1, 2, 4 °C or more, per century or per doubling of the CO2 concentration? Or just anything above zero? Over what time frame? And compared against what? A cool period? A warm period? A static period? The last 30 years? One complete cycle?

There is a lot of stretch in the terminology. Two people talking about global warming can be talking about two completely different things; if they don’t realize this, misunderstandings occur. Even more, an ill-defined term can be stretched as one goes along. For example: if temperatures go up, one can talk about global warming. If one or more series are not going up, maybe another dataset still goes up and can still confirm global warming. If all series stay the same or go down, one could still define warming as having higher temperatures than previous years/decades, just focusing on the rising part of the cycle.
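
That last trick, focusing on the rising part of a cycle, is easy to demonstrate with a toy calculation; my own example with an invented series (a pure 60-year cycle, no long-term trend at all): depending on the window chosen, the very same data shows warming, cooling or nothing.

    # An invented series: a pure 60-year cycle with zero long-term trend.
    # The fitted trend depends entirely on which window you pick.
    import math

    def trend_per_decade(series, start, end):
        # Ordinary least squares slope over series[start:end], per 10 steps.
        ys = series[start:end]
        xs = list(range(len(ys)))
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return 10 * slope

    series = [0.3 * math.sin(2 * math.pi * year / 60) for year in range(120)]

    print(f"years 45-75 (rising limb):  {trend_per_decade(series, 45, 75):+.3f} °C/decade")
    print(f"years 15-45 (falling limb): {trend_per_decade(series, 15, 45):+.3f} °C/decade")
    print(f"years 0-120 (full cycles):  {trend_per_decade(series, 0, 120):+.3f} °C/decade")

Pick the rising limb and you “prove” warming; pick the falling limb and you “prove” cooling; take full cycles and there is nothing there. With an ill-defined term, the window does all the work.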

And then we haven’t even talked about the predicted/projected consequences of global warming. Some say hurricanes are increasing in a warming world; others say there is not much confidence in that. The same with droughts: some attribute them to global warming, some don’t.

An example of stretching a vague definition is the shift from “temperatures were the highest in the last x years” to “warming since 1950”, which excludes the current pause, but also the inconvenient 1930s-1940s. This makes it still possible to talk about warming. Okay, not now, but in the past. But is that what the public understands when they hear this? They would think it is currently still warming, and therefore alarming.

Another example is the focus shifting to single events in a limited area, like the drought/forest fires in part of the contiguous United States (just a tiny area on the globe), or a storm like Sandy (ignoring the all-time low storm count/intensity).

With no clear definition everything, great or small, can be taken as evidence of global warming.

The same with the term: Climate Change

Here too it seems pretty clear at first glance: at least one element of the climate is changing.
But climate is an average over a longer timespan. Which timespan does one take: 1, 5, 10, 15, 30, 60 years or even longer? Which elements are important: temperature, precipitation, snowfall, sea level, storms, ice area,…? And what if one takes a change in weather as proof of climate change? Then one can prove anything.

Change is the norm. If one takes variability in a chaotic system as proof of change, there is no limit to the proof one can accumulate!

In conclusion: with these ill-defined terms, all the bases are covered for those who want to sound the alarm. If temperatures are not cooperating, surely there will be some change somewhere, anywhere. Call it moving goalposts, non-falsifiability or whatever. But it also means that even in the face of contrary evidence, a view can be kept alive indefinitely.

That is not all. Imagine the confusion when people say global warming but actually mean catastrophic anthropogenic global warming. When scientists state in the media that it has warmed since the 1950s (which is correct), the public thinks this proves “we caused it” and that it is “catastrophic”. Been there, done that. Adding to the “overwhelming” evidence that (catastrophic) global warming is happening and should be prevented.

How could alarmists and skeptics ever talk constructively when they have no common definitions of “global warming” and “climate change”? There is more agreement between them than is admitted, but their different definitions make them talk past each other.

Moreover, if there is no clear definition of “global warming” or “climate change”, how can we know when this global warming or climate change becomes/became/is catastrophic?

With such vague definitions, one can explain just about anything.