I used multiple regression to assess the fit between global temperatures since March 1958 (using coverage-corrected HadCRUT4 from Cowtan and Way 2014) and carbon dioxide levels, El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO). I converted carbon dioxide levels to radiative forcing using the formula from Myhre et al. (1998), then applied a 12-month moving average to all data to eliminate the seasonal cycle and reduce random noise; I want only the trend, not the noise or the seasonal cycles. I then used the cross-correlation function to calculate the lag between each regressor and global temperature.
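Those preprocessing steps can be sketched in Python with NumPy (the analysis itself was done in R; the function names and the 24-month lag search window here are my own choices, not from the original code):

```python
import numpy as np

def co2_forcing(ppm, ppm0=278.0):
    """Simplified CO2 radiative forcing (W/m^2) from Myhre et al. (1998):
    dF = 5.35 * ln(C / C0), with C0 a pre-industrial baseline (assumed here)."""
    return 5.35 * np.log(np.asarray(ppm, dtype=float) / ppm0)

def moving_average(x, window=12):
    """12-month moving average: removes the seasonal cycle and damps noise."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def lagged_corr(x, y, k):
    """Correlation of x[t] with y[t + k]; k > 0 means y lags behind x."""
    if k >= 0:
        a, b = x[:len(x) - k], y[k:]
    else:
        a, b = x[-k:], y[:len(y) + k]
    return np.corrcoef(a, b)[0, 1]

def best_lag(x, y, max_lag=24):
    """Lag that maximizes the cross-correlation between x and y.
    A negative value means y changes *before* x does."""
    return max(range(-max_lag, max_lag + 1), key=lambda k: lagged_corr(x, y, k))
```

The sign convention in `best_lag` is what makes the AMO result below interpretable: a driver's changes should show up at a positive lag in the driven series, not a negative one.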

Figure 1. Cross-correlation results between global temperature, radiative forcing, and three natural oceanic cycles.

Table 1. Best lag between each regressor and global temperature

Regressor | Lag (months) | Correlation

That last one is not a typo: changes in global temperatures preceded changes in the AMO by 2 months rather than lagging behind them, as would be expected if the AMO controlled global temperature. That nasty little fact eliminates the AMO as a possible driver of global warming, relegating it to a positive-feedback role at best. The following graph shows the relationship between global temperature and the four regressors.

Figure 2. Cross-correlation diagrams showing the relationship between global temperatures, radiative forcing, and three natural oceanic cycles.

A linear fit of radiative forcing against global temperature explains most of the variation in the data (R² = 0.8401). The natural cycles have far more complex relationships, with the linear trend explaining very little of the variation (ENSO: R² = 0.06174; PDO: R² = 0.07546; AMO: R² = 0.004143). For those who like percentages, radiative forcing explains 84.01% of the temperature trend since 1958, ENSO explains 6.17%, the PDO explains 7.65%, and the AMO 0.41%.
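Each of those R² values comes from a simple linear fit of temperature on a single regressor. A minimal sketch of how such a value is computed (a NumPy helper of my own, not the post's R code):

```python
import numpy as np

def r_squared(x, y):
    """Fraction of the variance in y explained by a straight-line fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```

For a single regressor this equals the squared correlation coefficient, which is why the four values can be read directly as "percent of the trend explained."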

I then used stepwise regression to build a multiple regression model between global temperatures, radiative forcing, ENSO, and PDO (omitting AMO as it lags behind changes in global temperature). The result?

Call:
lm(formula = Temp ~ RF + ENSO.lag, data = variables)

Residuals:
      Min        1Q    Median        3Q       Max
-0.223213 -0.048495  0.001585  0.060782  0.210634

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.607420   0.011882  -51.12   <2e-16 ***
RF           0.650090   0.009398   69.17   <2e-16 ***
ENSO.lag     0.052683   0.004087   12.89   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.08099 on 613 degrees of freedom
Multiple R-squared: 0.8931, Adjusted R-squared: 0.8927
F-statistic: 2560 on 2 and 613 DF, p-value: < 2.2e-16
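The same two-regressor model form, `lm(Temp ~ RF + ENSO.lag)`, can be reproduced with ordinary least squares in NumPy. The series below are synthetic stand-ins (the real data are not in the post), with true coefficients chosen near the estimates above, so this is an illustration of the fitting machinery only:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 616  # 613 residual degrees of freedom + 3 estimated parameters

# Synthetic stand-ins for the real series, illustration only
RF = np.linspace(0.3, 2.0, n)                  # rising radiative forcing
ENSO = np.sin(np.linspace(0.0, 40.0, n))       # oscillating ENSO index
Temp = -0.6 + 0.65 * RF + 0.05 * ENSO + rng.normal(0.0, 0.08, n)

X = np.column_stack([np.ones(n), RF, ENSO])    # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, Temp, rcond=None)  # OLS coefficient estimates

fitted = X @ beta
ss_res = np.sum((Temp - fitted) ** 2)
ss_tot = np.sum((Temp - Temp.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - X.shape[1])
```

With enough data, `beta` recovers the intercept and the two slopes, and `adj_r2` penalizes `r2` slightly for the number of fitted parameters, exactly as in the R summary.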

That's not too shabby taken at face value. The model explains 89.27% of the variation in the global temperature data, a statistically significant improvement over the model with radiative forcing alone (84.01%). Models that included the Pacific Decadal Oscillation did not improve the fit over radiative forcing plus the El Niño/Southern Oscillation (p = 0.1318). Using the model to predict the 12-month moving average of global temperature showed a good fit to the observations (R² = 0.8929). The main disagreements between the predicted and observed temperatures occurred during the major volcanic eruptions (El Chichón, Mt. Pinatubo), which the model does not account for. Not too shabby for such a simple model.
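The p = 0.1318 for adding the PDO comes from comparing nested models with a partial F-test: did the extra regressor reduce the residual sum of squares by more than chance would? A sketch of the statistic itself (converting it to a p-value needs an F-distribution, e.g. `scipy.stats.f.sf`, omitted here; the numbers in the example are made up for illustration):

```python
def partial_f(ss_res_reduced, ss_res_full, df_extra, df_resid_full):
    """Partial F statistic for nested OLS models: the drop in residual
    sum of squares per added regressor, scaled by the full model's
    residual variance. A large F means the extra regressor earns its keep."""
    numerator = (ss_res_reduced - ss_res_full) / df_extra
    denominator = ss_res_full / df_resid_full
    return numerator / denominator
```

With illustrative numbers, `partial_f(10.0, 8.0, 1, 100)` gives 25.0, far beyond the roughly 3.9 critical value for F(1, 100) at the 5% level; a value around 2.3 on the post's degrees of freedom would instead correspond roughly to the p ≈ 0.13 reported for the PDO.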

Figure 3. Observed versus predicted global temperature trends.

So, what does this exercise show? You may be able to show that natural variation explains temperature trends over a region or over very short time periods—but the temperature trend for the entire globe over several decades is explained by the increase in radiative forcing, with only minor input from natural variation. Add in the strong empirical evidence directly showing that the increase in greenhouse gases (especially carbon dioxide) is warming the planet and the case becomes overwhelming.