Thursday, September 11, 2014

James Taylor versus relative humidity and specific humidity

It appears that relative humidity and specific humidity continue to trip some people up.  Yes, I'm thinking of the screed James Taylor wrote on Forbes.com on Aug. 20.  In his article, Taylor trumpets two "facts": first, that relative humidity has declined, and second, that specific humidity isn't rising as fast as global climate models predict.  Since climate models assume that relative humidity has stayed constant, Taylor then claims that models are overestimating global warming.  Unfortunately for Taylor, his "facts" don't check out.

For his relative humidity finding, Taylor linked to a Friends of Science (FoS) graph.  Friends of Science is an oil-funded astroturf group run by Ken Gregory, and its website is filled with various science denier myths (e.g. no warming since 1998) backed with an artfully drawn graph that shows a completely flat trend line since 1996.  (Edit: I found out that they used RSS data, which was panned by none other than Roy Spencer back in 2011 for showing false cooling due to increased orbital decay.  The fact that FoS tries to pass RSS off as reliable three years later says much about their brand of "science.")

Back to Gregory's relative humidity graph as Taylor cited it.  As Gregory states in his 2013 report, the data come from NCEP Reanalysis I, the only such dataset that starts in 1948.  That is important, as one of the main limitations of NCEP Reanalysis I is that the radiosonde data it is based upon were not homogenized before 1973 (Elliott and Gaffen 1991).  Homogenization removes non-climatic factors, such as changes in location and instrumentation, from a data set.  Without homogenization, you get false trends in the data.  In the case of relative and specific humidity data, changes in radiosonde instruments would change the measured humidity.  This effectively means that any comparison between data from before 1973 and data from after 1973 is invalid.  Yet that is precisely the comparison Gregory made in his graph.  Even better?  Gregory omitted the 1000 mb (near the Earth's surface) pressure level, showing only the mid-troposphere (700 mb) on up to the top of the troposphere (300 mb).  Most of the atmosphere's moisture is found in the lower troposphere, not the upper.  Let's correct Gregory's errors.


Those trends Gregory found are far less impressive when using just the homogenized data.  Is it any wonder he wanted to include data with false trends?  Relative humidity has noticeably decreased at the top of the troposphere (300 mb), which is where the atmosphere is cooling.  Other than that, relative humidity has stayed pretty much constant since 1973.  That decline visible at the 850 mb level?  It's on the order of -0.88% per decade, which is hardly worth noting.  There's also evidence that NCEP Reanalysis I underestimates the humidity of the troposphere (Dessler and Davis 2010), resulting in false negative trends.  In short, the assumption that relative humidity will remain constant is a good one, no matter how badly Taylor and Gregory wish it wasn't.
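For anyone who wants to check that kind of number themselves, a trend in percent per decade is just the slope of a linear regression of relative humidity on year, multiplied by ten.  A minimal sketch in R, assuming a hypothetical data frame I'm calling rh850 with one row per year and columns Year and RH (annual mean relative humidity at 850 mb, in %):

#rh850 is a hypothetical data frame: one row per year, columns Year and RH (%)
fit <- lm(RH ~ Year, data = rh850)
coef(fit)["Year"] * 10        #regression slope is % per year; x10 gives % per decade
confint(fit)["Year", ] * 10   #95% confidence interval on the decadal trend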

Taylor's second "factual" statement claiming that specific (absolute) humidity hasn't risen rapidly enough is based on his demonstrably false statement about relative humidity and is backed by the same questionable 2013 report by Ken Gregory.  What is questionable about Gregory's report?  Beyond the highly questionable relative humidity graph, the NVAP data Gregory used is somewhat doubtful.  NASA, which gathered the data, has this to say about the NVAP data:

Key Strengths:

  • Direct observations of water vapor

Key Limitations:

  • Changes in algorithms result in time series (almost) unuseable for climate studies

That statement about limitations is found right on the main page of the NVAP website.  You won't find any mention of that key limitation anywhere in Gregory's report.  I highly doubt Taylor bothered to check that such a limitation exists.  Yet its existence eviscerates the entire premise behind Gregory's report.

Gregory's report is also questionable in light of the peer-reviewed literature.  Dessler and Davis (2010) found that reanalysis programs that included satellite data as well as radiosonde data generally found that specific humidity had increased in the upper troposphere.

Figure 3 from Dessler and Davis (2010), demonstrating how four modern reanalyses and the AIRS satellite measurements show that specific humidity in the upper troposphere has increased, in contrast to NCEP Reanalysis I (solid black line) and to Gregory's claim otherwise.
Chung et al. (2014) combined radiosonde and satellite measurements to show that specific humidity in the upper troposphere had increased since 1979 (the start of the satellite record) and that the increase could not be explained by natural variation.

Bottom line?  Gregory's opening sentence, which states his conclusion ("An analysis of NASA satellite data shows that water vapor, the most important greenhouse gas, has declined in the upper atmosphere causing a cooling effect that is 16 times greater than the warming effect from man-made greenhouse gas emissions during the period 1990 to 2001."), is utterly wrong.  Even if there had been a drop, it would not have had an effect anywhere near the size Gregory claimed.  Solomon et al. (2010) showed that a drop in stratospheric water vapor, which is even higher in the atmosphere than the top of the troposphere, slowed the rate of temperature rise by only about 25% compared to that expected from greenhouse gases alone.  That is nowhere near the "16 times greater" effect Gregory claimed, and certainly not enough to stop global warming.

In short, Taylor based his main premise on a single non-peer-reviewed report from an oil-supported astroturf group, a report that is factually false.  The remainder of his article is just a repetition of science denier canards, many of which I've covered in this blog before, and most of which merely demonstrate that Taylor has an abysmal grasp of climate science.

Monday, September 8, 2014

R code for my Seasonal Trends graph

I had a request for the code I used to generate the graphs in my Seasonal Trends post.


While it looks complex, the R code for it is very simple IF you have the data ready.  I'm assuming that you already have the temperature datasets you want as an R object (I keep several in an object I simply call "monthly": GISS, Berkeley Earth, Cowtan-Way, HadCRUT4, UAH, and RSS, along with a decimal-date Time column plus Year and numeric Month columns).  The code I used to generate the graph is as follows:
#Create separate datasets for each season

monthly.79<-subset(monthly, Time>=1978.92 & Time<2013.91)
DJF<-subset(monthly.79, Month=="12" | Month=="1" | Month=="2")

#Assign each December to the following year so that every DJF average
#covers one continuous winter (e.g. Dec 1979 + Jan/Feb 1980 = winter 1980)
DJF$Year_2 <- numeric(length(DJF$Year))
for (i in 1:length(DJF$Year)) {
        if (DJF$Month[i] == 12) {
                DJF$Year_2[i] <- DJF$Year[i] + 1
        } else {
                DJF$Year_2[i] <- DJF$Year[i]
        }
}
MAM<-subset(monthly.79, Month=="3" | Month =="4" | Month=="5")
JJA<-subset(monthly.79, Month=="6" | Month =="7" | Month=="8")
SON<-subset(monthly.79, Month=="9" | Month=="10" | Month=="11")

#Calculate the seasonal average for each year

#aggregate() names the averaged column "x" and the grouping year "Group.1",
#which is why those names appear in the formulas below.  The BEST column is
#the Berkeley Earth series in my "monthly" object.
DJF<-aggregate(DJF$BEST, by=list(DJF$Year_2), FUN=mean)
MAM<-aggregate(MAM$BEST, by=list(MAM$Year), FUN=mean)
JJA<-aggregate(JJA$BEST, by=list(JJA$Year), FUN=mean)
SON<-aggregate(SON$BEST, by=list(SON$Year), FUN=mean)

#Check for autocorrelation in the regression residuals

library(forecast) #for the auto.arima function

#If auto.arima() selects ARIMA(0,0,0) for the residuals, there is no
#significant autocorrelation and ordinary least squares is adequate
auto.arima(resid(lm(x~Group.1, data=DJF)), ic=c("bic"))
auto.arima(resid(lm(x~Group.1, data=MAM)), ic=c("bic"))
auto.arima(resid(lm(x~Group.1, data=JJA)), ic=c("bic"))
auto.arima(resid(lm(x~Group.1, data=SON)), ic=c("bic"))

 #Construct the plot

plot(x~Group.1, data=DJF, type="l", col="blue", xlab="Year", ylab="Temperature anomaly (ºC)", main="Seasonal Climate Trends", ylim=c(-0.1, 0.8))
points(x~Group.1, data=MAM, type="l", col="green")
points(x~Group.1, data=JJA, type="l", col="red")
points(x~Group.1, data=SON, type="l", col="orange")

#Add the trend lines

abline(lm(x~Group.1, data=DJF), col="blue", lwd=2)
abline(lm(x~Group.1, data=MAM), col="green", lwd=2)
abline(lm(x~Group.1, data=JJA), col="red", lwd=2)
abline(lm(x~Group.1, data=SON), col="orange", lwd=2)

legend(1979, 0.8, c("DJF", "MAM", "JJA", "SON"), col=c("blue", "green", "red", "orange"), lwd=2)

#Get the slopes (the Group.1 coefficient is the trend in ºC per year)

summary(lm(x~Group.1, data=DJF))
summary(lm(x~Group.1, data=MAM))
summary(lm(x~Group.1, data=JJA))
summary(lm(x~Group.1, data=SON))
That's all there was to it.  I just repeated this code, modifying only the period of the initial subset (Monthly.2002<-subset(monthly, Time>=2001.92 & Time<2013.91)) and the related seasonal subsets, to create the second graph.  To the person who requested my code: I hope this helps.

Monday, September 1, 2014

One hundred years ago today...

...the last passenger pigeon, a female named Martha, died in the Cincinnati Zoo.  A species that once had an estimated population of 3 billion was destroyed in roughly 50 years by a combination of habitat loss and overhunting.  The story of that extinction is being told in numerous articles on this centenary (e.g. in Nature, Audubon Magazine, National Geographic) and at museums like the Smithsonian Institution, which tell the story far better than I could here.  The Audubon Magazine article, in particular, is well worth reading as it details the history of the extinction.

So, could such an extinction happen today?  The sad answer is yes: it not only could, it is happening today.  We see the same market incentives that doomed the passenger pigeon, the drive to take every last remaining individual, at work on the bluefin tuna, whose Pacific population has collapsed to just 4% of what it was in the 1950s.  Protection of the little that remains has been held up by fishing interests, mainly in Japan, Mexico, the US, and South Korea.

The illegal ivory trade has claimed an estimated 100,000 African elephants since 2010, a figure some experts consider a gross underestimate.  The last large survey of elephant populations, in 2007, placed the total remaining in Africa at between 472,000 and 690,000 elephants.  Jones and Nowak wrote that they suspected the current population was around half that number.  At current rates, many areas of Africa will lose their elephants entirely.

The Western Black Rhino, a subspecies of black rhinoceros, was recently driven to extinction by habitat loss and overhunting, both for sport and to meet demand for powdered rhino horn in traditional Chinese medicine.

In the US, measures to protect the lesser prairie chicken were held up for years by opposition from various economic interests, despite the species having lost 86% of its habitat.  What finally broke the logjam was a population crash that cut the population in half in a single year (2012 to 2013).  The population now hovers around 17,000 birds.

These are just a few of the many examples I could cite from around the globe.  Habitat loss and overhunting still play a major role in extinctions today, just as they did 100 years ago.  And just like 100 years ago, market forces still overwhelmingly favor those who try to kill every single last individual.  Now we get to add climate change to the mix, which is predicted to have major impacts on extinction rates in coming decades.  You would hope that by now, we would have learned our lesson from the passenger pigeon.  Unfortunately, we as a society appear to be trying to prove that George Santayana was correct when he said "Those who cannot remember the past are condemned to repeat it."

Sunday, August 31, 2014

The Daily Fail: David Rose's newest cherry-pick.

David Rose, who is no stranger to cherry-picking climate data and then weaving artful tales based on those cherry-picks, is back with yet another example of his perversity.  This time, he's trumpeting a 2-year increase in Arctic sea ice as measured on a single day: August 25, 2012 vs. August 25, 2014, claiming a 43% increase based on those two very specific days.  This is misleading for multiple reasons, one of which he himself admits in small type under that large flashy graphic at the top of his article:
"These reveal that – while the long-term trend still shows a decline – last Monday, August 25, the area of the Arctic Ocean with at least 15 per cent ice cover was 5.62 million square kilometres." (emphasis added).
So, just what does that long-term trend show? This:


To generate this graph, I simply downloaded the NSIDC Arctic sea ice extent data and plotted the August 25 extent for each year.  When the data for August 25 were missing, I averaged the data for August 24 and August 26 to interpolate the extent on August 25.  I've added a loess smooth and 95% confidence intervals to highlight the trend.
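For anyone who wants to reproduce this, here is a minimal sketch of that approach in R.  The object and column names (nsidc, Year, Month, Day, Extent) are just placeholders for however you read in the NSIDC daily extent file:

#Pull out August 24-26 of each year from the daily NSIDC data
aug <- subset(nsidc, Month == 8 & Day %in% c(24, 25, 26))

#Use August 25 where it exists; otherwise average August 24 and August 26
extent25 <- sapply(unique(aug$Year), function(yr) {
        day25 <- subset(aug, Year == yr & Day == 25)
        if (nrow(day25) > 0) day25$Extent
        else mean(subset(aug, Year == yr & Day != 25)$Extent)
})
aug25 <- data.frame(Year = unique(aug$Year), Extent = extent25)

#Plot the series with a loess smooth and an approximate 95% confidence band
plot(Extent ~ Year, data = aug25, type = "b",
     ylab = "August 25 ice extent (millions of sq. km)")
smooth <- loess(Extent ~ Year, data = aug25)
pred <- predict(smooth, se = TRUE)
lines(aug25$Year, pred$fit, col = "red", lwd = 2)
lines(aug25$Year, pred$fit + 1.96 * pred$se.fit, col = "red", lty = 2)
lines(aug25$Year, pred$fit - 1.96 * pred$se.fit, col = "red", lty = 2)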

Is it any wonder that Rose wants to focus on the increase since 2012 while giving little more than lip service to the long-term trend?  The long-term data show that August 25, 2014 had the 8th-lowest extent for that date in the satellite record.  The long-term trend?  Still firmly negative.  So why the focus on the increase since 2012?  Simple.  It's the part of the satellite record that he could spin to fit his narrative.  It wouldn't fit his "all is well" narrative if he admitted that the Aug. 25, 2014 extent represented a loss of 1.9 million square kilometers from the same day in 1979.  It's the equivalent of a gambler claiming a 43% gain because he won back $17, while ignoring the fact that he had lost a total of $36 before his win.

He also claims that the ice is thicker and implies that ice volume has recovered, but he doesn't show the data on that, relying instead on the colors in the satellite image while utterly ignoring the trend in ice volume.  Examining the data shows why.


The take-home message from the volume data?  That the ice has "recovered" enough to get back up to the trend.  The last few years, volume had been declining faster than the overall trend.  Now?  It's right at the trend.  Not quite the picture of a "recovery" that Rose attempts to paint.

I'll leave the final word to Dr. Ed Hawkins, who Rose quoted near the end of his article.
"Dr Hawkins warned against reading too much into ice increase over the past two years on the grounds that 2012 was an ‘extreme low’, triggered by freak weather.

‘I’m uncomfortable with the idea of people saying the ice has bounced back,’ he said.
"
 That is a hilarious quote for Rose to include, as Rose spent his entire article trying to do just what Hawkins warned against.

Thursday, August 28, 2014

So what if CO2 was 2400 ppmv in the Mesozoic

This is a response to those who try to claim that global warming won't be so bad.  The gist of their argument is that since life thrived in the Mesozoic when CO2 was ~2400 ppmv and temperatures 8ºC warmer, climate change today isn't anything to be worried about.  Unfortunately, this argument ignores some very basic facts about biology and physics.  Here is some of what they're ignoring.

1) First, thanks to those individuals for accidentally confirming the relationship between CO2 and global temperature, as well as modern estimates of climate sensitivity.  At modern solar radiation levels and with a climate sensitivity parameter of 0.809ºC per W/m², a simple equilibrium calculation predicts that with CO2 at 2400 ppmv, global temperatures would rise by 9.3ºC above pre-industrial temperatures (a quick check of that arithmetic follows this list).  Factor in a weaker sun back in the Mesozoic and you get the 8ºC rise experienced from 2400 ppmv CO2 back then (Royer 2006).  Got to love it when those who dismiss science score an own goal and don't even realize it.

2) The species we have living on this Earth are not the same as the species that existed during the Mesozoic.  Then, the land was dominated by various species of dinosaurs, the air by pterosaurs, and the seas by ichthyosaurs, mosasaurs, and plesiosaurs.  The dominant plants of the Triassic and Jurassic were various species of gymnosperms, while the Cretaceous saw the rise of the angiosperms.  But that is largely irrelevant for today's species.  Most of today's species evolved during the Pleistocene, when global average temperatures were usually 4.5ºC colder than today.  Species are highly sensitive to changes in the normal temperature regime to which they are adapted.  Even a shift of a few tenths of a degree C is enough to make species migrate toward the poles and change their phenology.  A temperature increase of 8ºC above today's levels would be catastrophic for today's species, many of which are already at the upper limits of their normal temperature range.

3) While the total amount of warming is important, the rate at which that warming occurs is even more important.  A slow rate would allow species to evolve adaptations to the change in temperatures.  Unfortunately, the current rate of temperature change is far faster than the rate of evolutionary adaptation to changes in temperature.  Quintero and Wiens (2013) found that vertebrate species can adapt to at most 1ºC of temperature change per million years.  The current rate of temperature change, based on the past 30 years, is about 1.6ºC per century, over 10,000 times faster.
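Two of the numbers above are easy to check yourself.  Here is the arithmetic in R for the 9.3ºC figure in point 1 (assuming the standard simplified forcing expression ΔF = 5.35 ln(C/C0) and a pre-industrial CO2 concentration of 280 ppmv, both my assumptions here) and for the rate comparison in point 3:

#Point 1: equilibrium warming for CO2 at 2400 ppmv
#(assumes dF = 5.35*ln(C/C0) and pre-industrial CO2 of 280 ppmv)
lambda <- 0.809                  #climate sensitivity parameter, ºC per W/m^2
dF <- 5.35 * log(2400 / 280)     #radiative forcing, ~11.5 W/m^2
lambda * dF                      #equilibrium warming, ~9.3 ºC

#Point 3: how much faster is current warming than the maximum rate of adaptation?
adaptation <- 1 / 1e6            #~1 ºC per million years (Quintero and Wiens 2013)
current <- 1.6 / 100             #~1.6 ºC per century
current / adaptation             #= 16,000, i.e. well over 10,000 times faster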

I'm sure there's more that I've left out or just didn't think of while writing this.  The bottom line is that those who try to argue that increases in CO2 are no big deal are simply ignoring most of what we know about ecology, physiology, and evolution.

Tuesday, August 26, 2014

Roy Spencer and 95% of models are wrong

This is one that has been making the rounds since Spencer published it on his blog in February.  Here's the graph he created:


Take a good look.  Not only does his graph appear to show that most models run higher than both the HadCRUT4 surface record and the UAH satellite record, it also shows HadCRUT4 running higher than UAH.  That is...strange, to say the least.  IPCC AR5 (aka CMIP5) models were calibrated against 20th century temperatures (1900-1999) and have only been actually predicting temperatures since 2000.  However, Spencer's graph appears to show model output running higher than the observed temperature records for 1983-1999, during the calibration period.  That makes no sense at all.

What is going on?  Take a look at the y-axis label.  According to the y-axis label, Spencer simply subtracted the 1979-1983 average from the observations and models to create his temperature anomalies.  Displaying temperature graphs as anomalies isn't a big deal.  You can demonstrate that yourself easily.  Get GISS data and add 14.00 (the value of their 1951-1980 baseline average—okay, 13.998 if you're picky) and graph the resultant data.  Then graph the anomalies as given and compare the two graphs.  They will be the same, with the only difference being the values displayed on the y-axis.  The problem isn't in Spencer using anomalies.  The problem is the baseline he chose.  It's way too short.
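Here is a minimal sketch of that demonstration, assuming GISS annual means are already loaded in a data frame I'm calling giss with Year and Anomaly columns (both names are mine, not GISS's):

giss$Absolute <- giss$Anomaly + 14.0   #add the 1951-1980 baseline mean back in
par(mfrow = c(2, 1))
plot(Anomaly ~ Year, data = giss, type = "l", ylab = "Anomaly (ºC)")
plot(Absolute ~ Year, data = giss, type = "l", ylab = "Temperature (ºC)")
#The two curves are identical in shape; only the y-axis values differ.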

In climate science, the normal temperature baseline is a 30-year average.  Why?  That is a long enough time period to remove the effects of weather from the baseline average.  Any one year—or even a run of years—can be hotter or cooler than normal just by random chance.  Those random effects cancel each other out when averaged together over longer time periods.  Spencer knows this.  His UAH satellite data is normally baselined to a 30-year average (1981-2010).  Using just five years as a baseline means that his anomalies are subject to interference from weather, as five years is far too short to cancel out vagaries of the weather.  Let's demonstrate that.  Here's HadCRUT4 and UAH set to the same 1981-2010 baseline:


Notice how well the two observation records match for the entire period?  Check out the 1979-1983 period Spencer cherry-picked as his baseline.  Notice how UAH is always higher than HadCRUT4 for that period?  That partly explains why Spencer's graph shows HadCRUT4 as consistently higher than UAH.  It has nothing to do with actual temperatures.  It's the difference in the baselines that gives the false appearance of an actual difference.  UAH just happens to have a higher baseline value than HadCRUT4 over those five years, which makes the UAH anomalies appear smaller than the HadCRUT4 anomalies.  Setting the baseline to 1979-1983 makes the graph look like this:


Compare the two graphs.  Now, it appears that HadCRUT4 anomalies are consistently warmer than the UAH anomalies, which is simply not true.  It's a false appearance created by fudging the baseline.
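For anyone who wants to reproduce those two comparisons, the rebaselining itself is trivial.  A sketch using the "monthly" data frame from my Seasonal Trends post (the HadCRUT4 and UAH column names here are whatever you called them when you built that object):

#Re-zero both records to the standard 30-year baseline and to Spencer's 5-year one
base30 <- subset(monthly, Time >= 1981 & Time < 2011)
base5  <- subset(monthly, Time >= 1979 & Time < 1984)

had30 <- monthly$HadCRUT4 - mean(base30$HadCRUT4)
uah30 <- monthly$UAH - mean(base30$UAH)
had5  <- monthly$HadCRUT4 - mean(base5$HadCRUT4)
uah5  <- monthly$UAH - mean(base5$UAH)

#The data never change; only the offset between the two series does
par(mfrow = c(2, 1))
plot(monthly$Time, had30, type = "l", col = "red", xlab = "Year",
     ylab = "Anomaly (ºC)", main = "1981-2010 baseline")
lines(monthly$Time, uah30, col = "blue")
plot(monthly$Time, had5, type = "l", col = "red", xlab = "Year",
     ylab = "Anomaly (ºC)", main = "1979-1983 baseline")
lines(monthly$Time, uah5, col = "blue")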

Curiously, while HadCRUT4 now appears to show more warming than UAH, it's not consistently warmer the way Spencer's graph shows.  Neither HadCRUT4 nor UAH starts at exactly zero in 1983 as Spencer shows.  Furthermore, Spencer's graph doesn't show the major spike in 1998.  In fact, on Spencer's graph, UAH temperatures peak in 2013.  It's obvious that he did something beyond deliberately fudging the baseline.  The only question is what he did.  My guess is that he adjusted the observational data for ENSO, aerosols, and changes in solar radiation, similar to Foster and Rahmstorf (2011), and then readjusted the baselines until 1983 was the zero point.  While adjusting the data for those short-term effects was the right thing to do, fiddling with the baseline to give the false appearance that the IPCC models and HadCRUT4 were all warmer than UAH was not.  It reeks of deception.


What does Spencer's graph look like without the deliberately fudged baseline?


Note how well the IPCC models matched the observations prior to AD 2000?  Even after AD 2000, observations are well within the confidence interval for the models, even without adjusting for ENSO, aerosols, or changes in solar radiation.  Also note how the IPCC models are generally lower than either UAH or HadCRUT4 during the 1979-1983 period, which explains how Spencer got the graph he did.

In short, Spencer created his graph by deliberately fudging a baseline to give the false appearance that the observations were far lower than what the IPCC models were predicting.  What that graph calls into question is neither the models nor the observations but Spencer's integrity if he has to resort to deceiving his readers to maintain his increasingly untenable position contrary to the rest of the scientific world.

Addendum: Sou at HotWhopper has two excellent articles covering Spencer's deception with better graphics to display exactly what Spencer did:

http://blog.hotwhopper.com/2014/02/roy-spencers-latest-deceit-and-deception.html

http://blog.hotwhopper.com/2014/05/roy-spencer-grows-even-wearier.html

Thursday, August 21, 2014

More predictions of September Arctic sea ice extent

I published a prediction of Arctic sea ice extent on July 1 that was based on September sea ice extent from 1979 to 2013.  That model yielded a prediction that the average extent for September 2014 would be 4.135 million square kilometers.  However, that model does not take into consideration any other information we have on Arctic sea ice, such as the ice extent in previous months of the year.  It just gives the general trend of sea ice in September from year to year.  You cannot use it to predict ice extent based on current ice extent or conditions.

Given that limitation, I decided to build a regression model predicting average extent in September from the average extents between March and August.  I quickly ran into a major problem: collinearity between the predictor months.  So instead of building one grand model, I was forced to build separate models for each predictor month.  Without further ado, here are the top three models based on R² value:

Month     Model                   R²       Ice extent in 2014      Predicted Sept. ice extent
                                           (millions of km²)       (millions of km²)
June      -13.5300 + 1.6913x      0.7522   11.09                   5.23
July      -4.80933 + 1.18618x     0.8796   8.17                    4.88
August    -1.69389 + 1.12965x     0.9674   6.13                    5.23

(In each model, x is that month's average ice extent in millions of km².)
I also tested models for March, April, and May but found that predictive ability decreased rather dramatically. For instance, the R² value for the May/September regression was only 0.3878, roughly half of that for June.
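For the curious, each of these single-month models is just an ordinary lm() fit.  A sketch, assuming a data frame I'm calling ice with one row per year and columns Year, Jun, Jul, Aug, and Sep holding the monthly average extents in millions of km² (all of those names are my own):

#One simple regression per predictor month avoids the collinearity problem
fit.jun <- lm(Sep ~ Jun, data = ice)
fit.jul <- lm(Sep ~ Jul, data = ice)
fit.aug <- lm(Sep ~ Aug, data = ice)

#Compare the fits by R-squared
summary(fit.jun)$r.squared
summary(fit.jul)$r.squared
summary(fit.aug)$r.squared

#Predict September 2014 from the observed June 2014 extent (11.09 million km²)
predict(fit.jun, newdata = data.frame(Jun = 11.09))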

So far, the predictions based on June and July ice extents are higher than the one made based on the September trend by itself. The July prediction is very close to the median of the 2014 predictions submitted to the Sea Ice Prediction Network whereas the one made using the September trend alone is trending toward the bottom.  I'll update this post with the prediction made using August once August is over.

From the ARCUS website

Update: August ice extent came in at 6.13 million km². The predicted September extent based on August is 5.23 million km², the same as the prediction based on June. That is nearly the same as the observed extent in September 2013.