Monday, October 20, 2014

Global warming: Carbon dioxide vs. Natural cycles

The recent paper by Johnstone and Mantua (2014) has certainly made the rounds in conservative circles.  It's popped up several times on my Facebook feed as various friends and acquaintances share articles about it.  Unfortunately, most of those articles get it wrong, usually twisting Johnstone and Mantua's findings to imply that 80% of ALL global warming is natural.  As I explained in my last post, that is a blatant misinterpretation of their paper, which only applies to the northeastern Pacific and coastal regions of the US Pacific Northwest.  Globally, natural cycles do not explain the trend in global temperatures.  How can I say that?  Do the statistics.

I used multiple regression to assess the fit between global temperatures since March 1958 (using coverage-corrected HadCRUT4 from Cowtan and Way 2014) and carbon dioxide levels, El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO).  I converted carbon dioxide levels to radiative forcing using the formula from Myhre et al. (1998), then used a 12-month moving average on all data to eliminate seasonal cycles and reduce random noise.  I'm not interested in noise or seasonal cycles—I want only the trend.  I then used the cross-correlation function to calculate the lag between each regressor variable and global temperatures.
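The Myhre et al. (1998) simplified expression for CO2 forcing is ΔF = 5.35 × ln(C/C0) W/m². The post's analysis was done in R; here is a minimal Python sketch of that conversion (the 278 ppmv pre-industrial baseline is a common choice, assumed here):

```python
import math

def co2_forcing(c, c0=278.0):
    """Radiative forcing (W/m^2) from CO2 via the Myhre et al. (1998)
    simplified expression: dF = 5.35 * ln(C / C0).
    c0 = 278 ppmv is an assumed pre-industrial baseline."""
    return 5.35 * math.log(c / c0)

# Forcing at roughly 2014 CO2 levels (~400 ppmv) relative to pre-industrial:
print(round(co2_forcing(400.0), 2))  # → 1.95
```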

Figure 1.  Cross-correlation results between global temperature, radiative forcing, and three natural oceanic cycles.
Table 1.  Best lag between each regressor and global temperature

Regressor           Lag (months)   Correlation
Radiative Forcing   0              0.917
ENSO                4              0.251
PDO                 49             0.263
AMO                 -2             0.761

That last one is not a typo—changes in global temperatures preceded changes in the AMO by 2 months rather than lagging behind them, as would be expected if the AMO controlled global temperature.  That nasty little fact eliminates the AMO as a possible driver of global warming, relegating it to a positive feedback role at best.  The following graph shows the relationship between global temperature and the four regressors.

Figure 2.  Cross-correlation diagrams showing the relationship between global temperatures and radiative forcing and three natural oceanic cycles.
Only radiative forcing has a clear linear relationship with global temperatures since 1958 (R2 = 0.8401).  The natural cycles have far more complex relationships with the linear trend explaining very little of the variation in the data (ENSO: R2 = 0.06174, PDO: R2 = 0.07546, AMO: R2 = 0.004143).  For those who like percentages, radiative forcing explains 84.01% of the temperature trend since 1958, the ENSO explains 6.17%, the PDO explains 7.65%, and the AMO 0.41%.

I then used stepwise regression to build a multiple regression model between global temperatures, radiative forcing, ENSO, and PDO (omitting AMO as it lags behind changes in global temperature).  The result?

Call:
lm(formula = Temp ~ RF + ENSO.lag, data = variables)

Residuals:
      Min        1Q    Median        3Q       Max
-0.223213 -0.048495  0.001585  0.060782  0.210634

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.607420   0.011882  -51.12   <2e-16 ***
RF           0.650090   0.009398   69.17   <2e-16 ***
ENSO.lag     0.052683   0.004087   12.89   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.08099 on 613 degrees of freedom
Multiple R-squared: 0.8931,    Adjusted R-squared: 0.8927
F-statistic:  2560 on 2 and 613 DF,  p-value: < 2.2e-16

That's quite good taken at face value.  The model explains 89.27% of the variation in the global temperature data, a statistically significant increase over the model with just radiative forcing (84.01%).  Models that included the Pacific Decadal Oscillation did not improve the fit over radiative forcing plus the El Niño/Southern Oscillation (p = 0.1318).  Using the model to predict the 12-month moving average global temperature showed a good fit to the observations (R2 = 0.8929).  The main disagreements between predicted and observed temperatures occurred during the major volcanic eruptions (El Chichón, Mt. Pinatubo), which the model does not account for.  Not too shabby for such a simple model.
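The "statistically significant increase" question is a nested-model comparison, which R's anova() handles via a partial F-test.  Here is a minimal Python sketch of that test on synthetic data shaped like the post's problem (same sample size; the coefficients and noise level are made up, not the post's fit):

```python
import numpy as np

def partial_f(rss_small, df_small, rss_big, df_big):
    """Partial F-statistic for comparing nested OLS models:
    F = ((RSS_small - RSS_big) / (df_small - df_big)) / (RSS_big / df_big)."""
    return ((rss_small - rss_big) / (df_small - df_big)) / (rss_big / df_big)

# Synthetic data: 616 points, like the post's 12-month-averaged series.
rng = np.random.default_rng(0)
n = 616
rf = rng.normal(size=n)                 # stands in for radiative forcing
enso = rng.normal(size=n)               # stands in for lagged ENSO
temp = 0.65 * rf + 0.05 * enso + rng.normal(scale=0.08, size=n)

def rss(X):
    """Residual sum of squares from a least-squares fit of temp on X."""
    beta = np.linalg.lstsq(X, temp, rcond=None)[0]
    return float(np.sum((temp - X @ beta) ** 2))

small = np.column_stack([np.ones(n), rf])        # RF only
big = np.column_stack([np.ones(n), rf, enso])    # RF + ENSO
F = partial_f(rss(small), n - 2, rss(big), n - 3)
print(F > 10)  # a large F means the extra regressor genuinely improves the fit
```

A small F (as with the PDO models the post rejects) means the extra regressor buys nothing beyond noise.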

Figure 3.  Observed versus predicted global temperature trends.
Examining the diagnostic plots for the winning model shows no major departures from the expected.  The residual plots are somewhat odd, but that is likely a consequence of using a 12-month moving average to smooth the original data.


So, what does this exercise show?  You may be able to show that natural variation explains temperature trends over a region or over very short time periods—but the temperature trend for the entire globe over several decades is explained by the increase in radiative forcing, with only minor input from natural variation.  Add in the strong empirical evidence directly showing that the increase in greenhouse gases (especially carbon dioxide) is warming the planet and the case becomes overwhelming.

Monday, October 13, 2014

Temperature trends and natural variation in the Pacific Northwest

A recent study by Johnstone and Mantua (2014) found a high correlation (r = 0.78) between sea surface temperatures since 1900 and changes in atmospheric pressure over the Northeastern Pacific, claiming that 80% of the variance in sea surface temperatures in the Northeastern Pacific was explained by changes in the North Pacific high.

The key finding from Johnstone and Mantua (2014).  The red line is observed sea surface temperatures and the blue line is predicted sea surface temperatures based on their statistical model.  The correlation coefficient is the correlation between the observed and predicted temperature time series.
Not surprisingly, the climate denialsphere trumpeted the results (e.g. WUWT, CATO, Breitbart.com, and various blogs).  Unfortunately, most of those "sources" appear to have gotten their information mostly from newspaper articles (the LA Times article was cited frequently), with a few references to the abstract of the study.

Johnstone and Mantua found that temperatures in the Northeastern Pacific and coastal regions of the US Pacific Northwest have risen since 1900 due to a weakening of the North Pacific high and resultant changes in surface winds and ocean currents (see also summaries in the NY Times and Climate Central).  They also noted that such changes were not predicted by current climate models.  Their conclusions were that, for the Northeastern Pacific and coastal regions of the Pacific Northwest, changes in the North Pacific high and resultant changes in winds and currents accounted for 80% of the temperature rise since 1900 and that such changes were not explained by global warming.

Johnstone and Mantua were careful to note in interviews (e.g. Wines 2014) that their conclusions only apply to the coastal regions and the ocean, not areas further inland, finding that the relationship between the North Pacific high and temperature weakened the further inland they went.  Indeed, a study published earlier this year on those same inland areas found that the rise in greenhouse gases, not changes in natural factors, was the dominant factor in rising temperatures there (Abatzoglou et al. 2014).  Johnstone and Mantua also emphasized that the relationship they found was unique to the Northeastern Pacific, which has several characteristics that combine to make wind the dominant factor in that region, and was unlikely to apply elsewhere.

Critics have noted three main weaknesses in Johnstone and Mantua's study.  The first is that their results depend solely on a correlation between the North Pacific high and surface temperatures, without any concurrent work done to actually show that changes in the North Pacific high can cause the observed changes in surface temperatures.  As the mantra states, correlation does not imply causation.

The second is that they have no evidence showing that the changes in the Northeastern Pacific winds are natural.  Johnstone and Mantua based that conclusion on the fact that none of the global climate models they examined showed similar changes in Northeastern Pacific winds.  However, as they themselves admit, current climate models do not capture such regional changes very well.  They are therefore basing their conclusion that the change is natural on an absence of evidence in current models while simultaneously noting that current models are incapable of showing the evidence they seek.

The third is that their results depend on the sea level pressure data set they picked to represent the North Pacific high.  As Abatzoglou et al. (2014) noted in a comment on Johnstone and Mantua's paper, there are several data sets available with highly divergent results prior to 1940.  Johnstone and Mantua picked the one that showed the highest correlation with sea surface temperatures whereas different data sets would not show any such relationship.

I have one nit to add of my own.  Johnstone and Mantua claim to have shown that up to 80% of the temperature rise in the Northeastern Pacific is natural.  That claim appears to be based on the correlation coefficient (r = 0.78) they found between observed and predicted sea surface temperatures.  There are several problems with that conclusion.  The correlation coefficient does not show how much of the variation in temperatures was explained by the natural variables.  It doesn't even show how much of the variation in observed sea surface temperatures is explained by predicted sea surface temperatures.  To get that, you must calculate the R2 value.  In the case of Johnstone and Mantua's study, the R2 value is 0.61, meaning that only 61% of the variation in observed temperatures is explained by the predicted temperatures, not the 80% claimed.
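The arithmetic behind that correction is one line: for a simple correlation, the variance explained is the square of r, so r = 0.78 gives R2 ≈ 0.61, not 0.78:

```python
r = 0.78                # correlation between observed and predicted SSTs
r_squared = r ** 2      # fraction of variance explained, NOT the correlation
print(round(r_squared, 2))  # → 0.61
```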

A correlation with an R2 of 61% is still exceptional.  However, that degree of correlation between observed and predicted temperatures should not surprise anyone.  Johnstone and Mantua used observed sea surface temperatures to construct their statistical model, so a high correlation between observed and predicted values is expected by construction.  It does not mean that 61% of the temperature rise is due to changes in the North Pacific high.  To get at the relationship between sea surface temperatures and the North Pacific high, you have to go to the statistical model itself.  When I downloaded their data and attempted to replicate their statistical model (lm(formula = SSTarc ~ lagged SLP1)), I found an R2 value of 0.32 for the model itself, still highly statistically significant (p < 2.2e-16) but nowhere near the claimed 80% of variance.

So, what do I make of the study?  An interesting result—IF it holds up to scrutiny—but it does not say anything about whether or not the rise in Northeastern Pacific sea surface temperatures is ultimately due to global warming and most certainly does not disprove anything of what we understand about global warming.

Friday, October 3, 2014

Seeing how well predictions for September Arctic sea ice did in 2014

In August, I published a post listing predictions of what the average September sea ice extent would be in 2014.  Since September 2014 is now past, we can go back and see how those predictions panned out.  First, here are the predictions again:

Month    Model                   R2       2014 extent (millions of km2)   Predicted Sept. extent (millions of km2)
June     -13.5300 + 1.6913x      0.7522   11.09                           5.23
July     -4.80933 + 1.18618x     0.8796   8.17                            4.88
August   -1.69389 + 1.12965x     0.9674   6.13                            5.23

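Each predicted value is just that month's 2014 extent plugged into its regression equation.  A quick Python check (the models themselves are from my August post) reproduces the table's last column:

```python
# (intercept, slope, 2014 extent) for each month's regression model;
# each model predicts September extent (millions of km^2):
models = {
    "June":   (-13.5300, 1.6913, 11.09),
    "July":   (-4.80933, 1.18618, 8.17),
    "August": (-1.69389, 1.12965, 6.13),
}
predictions = {m: round(a + b * x, 2) for m, (a, b, x) in models.items()}
print(predictions)  # June and August round to 5.23, July to 4.88
```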
The average extent for September 2014 was 5.21 million km2, very close to the predictions based on June and August, and higher than the one based on July.  The extent also came in higher than most of the Sea Ice Prediction Network predictions logged in June.


As for why the sea ice survived in better shape than many predicted, Neven had this to say on his blog:
 "The 2014 melting season is about to end. The end result is very similar to that of last year, despite large differences between both melting seasons. Last year the Arctic was dominated by persistent cyclones, keeping things cold and cloudy. This year there were more bouts of sunny weather, but it seems little heat was imported from lower latitudes. A lack of strong pressure gradients also caused relatively little movement and export, clearly to be seen on the Atlantic side of the Arctic, where in the last couple of years (even last year, see here) an onslaught had taken place.
At the same time, on the Pacific side of the Arctic, the ice that had been strengthened due to last year's rebound, managed to hold out through much of the melting season, although the late momentum seems to be sustaining the melting season now (together with weather, of course). Large areas of very thin ice and milky wisps in the Beaufort, Chukchi and Eastern Siberian Seas are disappearing, keeping many a trend line dropping on sea ice graphs.
But most of this weak ice will be saved by the bell, like we saw in 2010 and 2011, and the 2014 melting season will basically end up at the same level as last year and 2009. If not today, then later this week."
Adding last month to the data since October 1978 gives us the following overall trend revealed via 12-month moving averages to remove the seasonal cycle:


While the ice is currently above the overall trend, it's not enough to alter that trend.  Any talk of a "rebound" at this point is premature.  Note that after each new record low, there are several years following where ice levels are higher than the record low—and that overall ice levels continue to decline even during those "rebounds."

Even looking at just September ice extent reveals that the overall decline continues despite the higher-than-the-record-low ice of the past two years.


Given the continued negative trend in ice volume, it's highly likely that any uptick is temporary and we'll start seeing declining September ice extents again, as has happened multiple times since 1979.  All the recent uptick did was just bring ice volume levels back to the regression line without changing the slope of the line.



The take-home message?  Despite the "ice is recovering" messages of various blogs (e.g. WUWT) and various commentators, Arctic sea ice has not recovered.  The relative uptick of the past two years is in line with previous behavior from the Arctic ice extent and does not alter the overall trend in the slightest.

Monday, September 22, 2014

Trend since 1998—significant??

I had a question sent to me about the trend since 1998.  As many of you know, my last post included an analysis which showed that the linear regression trend since 1998 was statistically significant.

Trends versus start year.  Error bars are the 95% confidence intervals.
My questioner asked if I had accounted for autocorrelation in my analysis.  The short answer is "No, I did not."  The reason?  According to my analysis, it wasn't necessary.

Here are my methods and R code.

#Get coverage-corrected HadCRUT4 data and rename the first two columns
CW<-read.table("http://www-users.york.ac.uk/~kdc3/papers/coverage2013/had4_krig_annual_v2_0_0.txt", header=F)
names(CW)[1]<-"Year"
names(CW)[2]<-"Temp"

#Analysis for autocorrelation—I check manually as well but so far the auto.arima function has performed admirably.
library(forecast)
auto.arima(resid(lm(Temp~Year, data=CW, subset=Year>=1998)), ic=c("bic"))

The surprising result?
Series: resid(lm(Temp ~ Year, data = CW, subset = Year >= 1998))
ARIMA(0,0,0) with zero mean    

sigma^2 estimated as 0.005996:  log likelihood=18.23
AIC=-34.46   AICc=-34.18   BIC=-33.69
I was expecting something on the order of ARIMA(1,0,1), which is the autocorrelation model for the monthly averages.  Taking the yearly average rather than the monthly average effectively removed autocorrelation from the temperature data, allowing the use of a white-noise regression model.
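For readers who want to check residual autocorrelation without auto.arima, the underlying quantity is just the lag-1 correlation of the residual series.  A minimal Python sketch on synthetic series (illustrative, not the HadCRUT4 residuals):

```python
import numpy as np

def lag1_autocorr(series):
    """Lag-1 autocorrelation of a series: corr(e[t], e[t-1]) after centering."""
    e = np.asarray(series, dtype=float)
    e = e - e.mean()
    return float(np.sum(e[1:] * e[:-1]) / np.sum(e * e))

rng = np.random.default_rng(42)
white = rng.normal(size=5000)   # white noise: lag-1 autocorrelation near 0
walk = np.cumsum(white)         # random walk: lag-1 autocorrelation near 1
print(round(lag1_autocorr(white), 3))
print(round(lag1_autocorr(walk), 3))
```

Residuals that behave like the first series justify an ordinary white-noise regression; residuals like the second demand an ARIMA-style correction.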

trend.98<-lm(Temp~Year, data=CW, subset=Year>=1998)
summary(trend.98)
Call:
lm(formula = Temp ~ Year, data = CW, subset = Year >= 1998)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.14007 -0.05058  0.01590  0.05696  0.11085 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept) -19.405126   9.003395  -2.155   0.0490 *
Year          0.009922   0.004489   2.210   0.0443 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.08278 on 14 degrees of freedom
Multiple R-squared: 0.2587,    Adjusted R-squared: 0.2057
F-statistic: 4.885 on 1 and 14 DF,  p-value: 0.04425
The other surprise?  That the trend since 1998 was significant even with a white-noise model.  Sixteen data points is not normally enough to reach statistical significance unless a trend is very strong.

Temperature trend since 1998

Sunday, September 21, 2014

The "no warming" claim rises from the dead yet again.

Like a movie vampire, this one keeps coming back no matter how many stakes are driven through its heart.  I've covered this one (here, here, and here).  Bluntly: There is absolutely no evidence that global warming has stopped.  For global warming to stop, the Earth's energy balance must be either zero or negative.  The most recent estimates for the energy imbalance are generally between +0.5 W/m2 and +1.0 W/m2.  The only way the Earth is not going to warm while it is gaining energy is if the laws of thermodynamics magically do not apply.  If the Earth is gaining energy, some part of it, somewhere, must be getting warmer.  The heat must go into either melting ice, warming the oceans, warming the land, or warming the atmosphere (or some combination thereof).

Ah, but what about the atmosphere?  Has warming in the atmosphere stopped?  Again, a blunt "No."  Almost every claim of "no warming" comes from someone trying to start a linear trend line at 1998—an outlier year which saw the strongest El Niño event on record.  How do we know 1998 was an outlier?  Simple statistics.

First, I took coverage-corrected HadCRUT4 monthly temperature data between January 1970 and December 2013 and calculated the yearly average.  I then fitted a loess regression to that data.  The result?





I then plotted the residuals for that plot.  For those who are not statisticians, residuals are calculated by subtracting the predicted value based on the trend from each data point.  Residuals show how far each data point deviates from the trend, with a point right on the trend coming in with a residual of zero.  Taking the residuals also removes the trend from the data, which is handy for analyzing cycles and the like that occur around the trend.
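That residual calculation is easy to demonstrate.  Here is a minimal Python sketch on synthetic data (a constant trend plus noise with an injected spike standing in for 1998; the numbers are made up, not HadCRUT4):

```python
import numpy as np

# Toy annual series: a steady 0.017 deg/yr trend plus noise, with one
# injected outlier year standing in for 1998:
years = np.arange(1970, 2014)
rng = np.random.default_rng(1)
temps = 0.017 * (years - 1970) + rng.normal(scale=0.05, size=years.size)
temps[years == 1998] += 0.3            # the outlier

# Residuals = observed minus the value predicted by the fitted linear trend:
slope, intercept = np.polyfit(years, temps, 1)
resid = temps - (slope * years + intercept)

# The injected outlier shows up as the largest residual:
outlier_year = int(years[np.argmax(np.abs(resid))])
print(outlier_year)  # → 1998
```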


Note that bright red point?  That's 1998.  Notice that it deviates farther from the trend than any of the points after it?  That's important as it essentially tells us that if we start a linear trend from 1998, that single data point will influence the trend more than any other year from 1999 to 2013.  Now I can hear someone saying, "But that's with a fancy-schmancy nonlinear regression.  What does plain old linear regression show?"






The residual plot reveals the following:


It doesn't really matter which regression technique you use.  1998 was an outlier by any definition.  How does starting a linear regression near 1998 affect the trend?  Quite a bit.  First, as the start point gets closer to 1998, the beginning of the trend line gets "pulled up" toward the 1998 outlier, thereby decreasing the apparent slope of the line.  Second, you have fewer and fewer data points left in the analysis to compensate for that upward pull, making the 1998 outlier more influential on the trend as the dataset gets smaller.  Third, your analysis loses power as the number of data points decreases and becomes less likely to show that a trend is statistically significant, even when trends actually exist.  You can easily see the effect of starting closer and closer to the 1998 outlier in the graph below:


If I plot the calculated trends versus the year each trend starts, I get the following:

Trends versus start year.  Error bars are the 95% confidence intervals.
First, the calculated trend gets lower as one gets closer to 1998 (closer to the outlier) and the confidence intervals around the trend get larger as less data is used in each calculated trend.  Second, all the confidence intervals overlap—there is no point as yet where one can safely say that the trend has actually changed.  Third—and somewhat surprising—even the warming trend since 1998 is statistically significant.  Guess deniers will have to find another start point for their "no warming since..." claims.
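The mechanics behind that trends-versus-start-year plot are easy to reproduce.  Here is a minimal Python sketch on synthetic data (constant trend, noise, and a 1998 spike; illustrative numbers only), showing how the uncertainty on the slope balloons as the window shrinks:

```python
import numpy as np

# Synthetic annual anomalies: a constant 0.017 deg/yr trend plus noise,
# with a spike at 1998 standing in for the big El Nino:
years = np.arange(1970, 2014)
rng = np.random.default_rng(7)
temps = 0.017 * (years - 1970) + rng.normal(scale=0.05, size=years.size)
temps[years == 1998] += 0.25

def trend_and_se(start):
    """OLS slope and its standard error for the subset from `start` onward."""
    m = years >= start
    x, y = years[m].astype(float), temps[m]
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (x.size - 2) / np.sum((x - x.mean()) ** 2))
    return slope, se

for start in (1970, 1990, 1998):
    slope, se = trend_and_se(start)
    print(start, round(slope, 4), round(se, 4))  # SE grows as the window shrinks
```

The 95% confidence interval is roughly the slope plus or minus two standard errors, which is why the error bars in the figure widen toward the right.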

In short, all a claim of "no warming" shows is that the person making it either a) doesn't understand how global warming works and/or b) doesn't understand statistics.  For many such individuals, I'd guess that the real reason is both a and b.

Tuesday, September 16, 2014

WUWT and how NOT to test the relationship between CO2 and temperature

WUWT published a piece by Danle Wolfe which purports to measure the correlation between CO2 and global temperature.  As you can probably predict, Wolfe's conclusion is that there is no relationship.
"Focusing on the most recent hiatus below, both visually and in a 1st order linear regression analysis there clearly is effectively zero correlation between CO2 levels and global mean temperature."
 Unfortunately for Wolfe, all he's produced is a fine example of mathturbation as well as an example of forming a conclusion first then warping the evidence to fit.

What Wolfe did was cross-correlate GISS land temperature data and Mauna Loa CO2 records, with two vertical lines dividing the plot into three sections.  The first section is marked "~18 years", the middle is marked "~21 years", and the last section is marked "~17 years".

Figure 1.  Danle Wolfe's plot from WUWT
Why land temperatures rather than land + ocean temperatures?  We don't know, as he failed to justify his choice.  There's one other curiosity about his plot.  We know his plot starts in 1959 as he gave that information.  That would make the first section from 1959 to 1977, the middle section from 1977 to 1998, and the last from 1997 to 2014, which means there's an overlap of 1 year between his middle and last sections.  The problem?  The maximum CO2 level in 1997 (367 ppmv) does not match the vertical line on his graph.


Figure 2.  Temperatures vs CO2 with loess trend line.
His second line looks to be around 372, a level first crossed in 2001, not 1997.  That makes his last section at most 13 years long rather than the 17 years he claimed.  Furthermore, a loess regression reveals that his lines do not divide the graph into "no correlation" and "correlation" sections as he implied.  His "no correlation" sections are nowhere near as long as he claimed them to be.

The next deception in his graph?  He failed to remove the annual cycle from both the temperature record and the CO2 record before cross-correlating them.

Figure 3.  Seasonal cycles in both CO2 records and GISS temperatures.
Time series decomposition shows that both GISS temperatures and CO2 records have 12-month cycles—and also shows that the cycles are out-of-phase.  This makes sense as the CO2 annual cycle is tied in with the Northern Hemisphere growing season and therefore only indirectly tied to global average temperatures.  Accordingly, the cycles must be removed to get the true relationship.  Just compare the cross-correlation graph without removing the annual cycles with one with the annual cycles removed via a 12-month moving average:

Figure 4.  Scatterplots of CO2 versus temperatures, both with and without seasonal cycles removed.
Just removing the annual cycles via a 12-month moving average removed much of the noise Wolfe depended upon to make it look like there was no correlation.  Even when he tried a moving average to remove the cycle, he failed.  Simply put, a 10-month moving average does not eliminate a 12-month cycle.  You can see that in his graph, especially the CO2 line.  If you want to remove an annual cycle, you must use a 12-month moving average, not a 10-month moving average.

Figure 5.  Ten- versus twelve-month moving averages.  Note that the seasonal cycle is still apparent in the 10-month moving average whereas it is fully removed in the 12-month moving average.
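The reason is simple arithmetic: a moving average whose window spans exactly one full cycle sums a pure 12-month oscillation to zero, while a 10-month window leaves a substantial residual oscillation.  A minimal Python sketch (a pure sine wave, not Wolfe's data) demonstrates this:

```python
import numpy as np

# A pure 12-month seasonal cycle sampled monthly for 20 years:
t = np.arange(240)
cycle = np.sin(2 * np.pi * t / 12)

def moving_average(x, w):
    """Simple w-point moving average."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# A 12-point window spans exactly one cycle, so the average is ~0 everywhere;
# a 10-point window leaves roughly 19% of the cycle's amplitude behind:
print(np.max(np.abs(moving_average(cycle, 12))))  # ~0
print(np.max(np.abs(moving_average(cycle, 10))))  # clearly nonzero
```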
Last, Wolfe failed to account for ENSO, aerosols, solar output, or any of the other non-CO2-related influences on global temperature.  His viewpoint that CO2 must be the only thing that influences global temperature is dead wrong.  There have been several studies over the past decade quantifying and then removing non-CO2 influences on global temperatures via multiple regression (e.g. Lean and Rind 2008, Foster and Rahmstorf 2011, Rahmstorf et al. 2012).  Yet it appears that Wolfe is either ignorant of that work or deliberately ignoring it.

What difference does factoring out the seasonal cycle and non-CO2 influences like El Niño/Southern Oscillation, sulfur aerosols, and solar output make on the correlation between CO2 and global temperatures?  Quite a bit.

Figure 6.  Adjusted GISS temperatures versus CO2 with annual cycles removed.
I added a loess regression line to highlight the trend.  Compare that to Wolfe's graph in figure 1 and my graph in figure 2.  Note the differences?  Once seasonal cycles and non-CO2-climate factors are removed, the correlation between global temperatures and CO2 is clear.

And just for Wolfe: Beyond fudging your second vertical line and  "forgetting" to account for seasonal cycles and climate influences like ENSO, solar output, and sulfur aerosols, you also forgot to account for autocorrelation when you did your regression since 1999.  Hint: There's a world of difference between a white noise model and an ARMA(2,1) model, especially after you take out the seasonal cycle, ENSO, aerosols, and changes in the solar cycle.  In "statistician speak," you only got the results you did because of your sloppy, invalid "analysis."
Figure 7.  Adjusted GISS vs CO2 since 1999 after seasonal cycles are removed
Generalized least squares fit by REML
  Model: GISS ~ CO2
  Data: monthly
  Subset: Time >= 1999
        AIC       BIC   logLik
  -1311.377 -1292.253 661.6886

Correlation Structure: ARMA(2,1)
 Formula: ~1
 Parameter estimate(s):
      Phi1       Phi2     Theta1 
 1.3709034 -0.4152224  0.9999937 

Coefficients:
                 Value Std.Error   t-value p-value
(Intercept) -1.9335047 0.6987245 -2.767192  0.0062
CO2          0.0058194 0.0018289  3.181950  0.0017

 Correlation:
       (Intr)
CO2  -1   

Standardized residuals:
        Min          Q1         Med          Q3         Max 
-1.54739247 -0.69113533 -0.06509602  0.78913695  1.69592575 

Residual standard error: 0.05121744
Degrees of freedom: 181 total; 179 residual

Thursday, September 11, 2014

James Taylor versus relative humidity and specific humidity

It appears that relative humidity and specific humidity continue to trip some people up.  Yes, I'm thinking of the screed James Taylor wrote on Forbes.com on Aug. 20.  In his article, Taylor trumpets two "facts": first, that relative humidity has declined, and second, that specific humidity isn't rising as fast as global climate models predict.  Since climate models assume that relative humidity has stayed constant, Taylor then claims that models are overestimating global warming.  Unfortunately for Taylor, his "facts" don't check out.

For Taylor's relative humidity finding, he linked to a Friends of Science (FoS) graph.  Friends of Science is an oil-funded astroturf group run by Ken Gregory and is filled with various science denier myths (e.g. no warming since 1998) backed with artfully drawn graphs showing a completely flat trend line since 1996.  Edit: I found out that they used RSS data, which was panned by none other than Roy Spencer back in 2011 for showing false cooling due to increased orbital decay.  The fact that FoS tries to pass RSS off as reliable three years later says much about their brand of "science."

Back to Gregory's relative humidity graph as Taylor cited it.  As Gregory states in his 2013 report, the data comes from the NCEP Reanalysis I, which is the only such dataset that starts in 1948.  That is important, as one of the main limitations of the NCEP Reanalysis I is that the radiosonde data it is based upon was not homogenized before 1973 (Elliot and Gaffen 1991).  Homogenization removes non-climatic factors such as changes in location and instrumentation from the data set.  Without homogenization, you get false trends in the data.  In the case of relative and specific humidity data, changes in radiosonde instruments would change the measured humidity.  This effectively means that any comparison between data before 1973 and data after 1973 is invalid.  Yet that is precisely the comparison Gregory made in his graph.  Even better?  Gregory omitted the 1000 mb (near the Earth's surface) pressure level, showing only the mid-troposphere (700 mb) on up to the stratosphere (300 mb).  Most of the atmosphere's moisture is found in the lower troposphere, not the upper.  Let's correct Gregory's errors.


Those trends Gregory found are far less impressive when using just homogenized data.  Is it any wonder he wanted to include data that included false trends?  Relative humidity has noticeably decreased for the top of the troposphere (300 mb), which is where the atmosphere is cooling.  Other than that, relative humidity has stayed pretty much constant since 1973.  That decline visible at the 850 mb level?  It's on the order of -0.88% per decade, which is hardly worth noting.  There's also evidence that NCEP Reanalysis I underestimates the humidity of the troposphere (Dessler and Davis 2010), resulting in false negative trends.  In short, the assumption that relative humidity will remain constant is a good assumption, no matter how badly Taylor and Gregory wish it wasn't.

Taylor's second "factual" statement claiming that specific (absolute) humidity hasn't risen rapidly enough is based on his demonstrably false statement about relative humidity and is backed by the same questionable 2013 report by Ken Gregory.  What is questionable about Gregory's report?  Beyond the highly questionable relative humidity graph, the NVAP data Gregory used is somewhat doubtful.  NASA, which gathered the data, has this to say about the NVAP data:

Key Strengths:

  • Direct observations of water vapor

Key Limitations:

  • Changes in algorithms result in time series (almost) unuseable for climate studies

That statement about limitations is found right on the main page of the NVAP website.  You won't find any mention of that key limitation anywhere in Gregory's report.  I highly doubt Taylor bothered to check that such a limitation exists.  Yet its existence eviscerates the entire premise behind Gregory's report.

Gregory's report is also questionable in light of the peer-reviewed literature.  Dessler and Davis (2010) found that reanalysis programs that included satellite data as well as radiosonde data generally found that specific humidity had increased in the upper troposphere.

Figure 3 from Dessler and Davis (2010) demonstrating how four modern reanalyses and the AIRS satellite measurements show that specific humidity in the upper troposphere has increased, in contrast to NCEP Reanalysis I (solid black line) and in contrast to Gregory's claim otherwise.
Chung et al. (2014) combined radiosonde and satellite measurements to show that specific humidity in the upper troposphere had increased since 1979 (the start of the satellite record) and that the increase could not be explained by natural variation.

Bottom line?  Gregory's opening sentence which stated his conclusion ("An analysis of NASA satellite data shows that water vapor, the most important greenhouse gas, has declined in the upper atmosphere causing a cooling effect that is 16 times greater than the warming effect from man-made greenhouse gas emissions during the period 1990 to 2001.") is utterly wrong.  Even if there had been a drop, it wouldn't have the size of the effect that Gregory claimed.  Solomon et al. (2010) showed that a drop in stratospheric water vapor, which is even higher in the atmosphere than the top of the troposphere, slowed the rate of temperature rise by only 25% compared to that expected from greenhouse gases alone.  The size of the effect is nowhere near the "16 times greater" that Gregory claimed, certainly not enough to stop global warming.

In short, Taylor based his main premise on a single non-peer-reviewed report from an oil-supported astroturf group, a report that is factually false.  The remainder of his article is just a repetition of science denier canards, many of which I've covered in this blog before, and most of which merely demonstrate that Taylor has an abysmal grasp of climate science.