Sunday, August 31, 2014

The Daily Fail: David Rose's newest cherry-pick.

David Rose, who is no stranger to cherry-picking climate data and then weaving artful tales based on those cherry-picks, is back with yet another example of his perversity.  This time, he's trumpeting a 43% increase in Arctic sea ice extent between two single days two years apart: August 25, 2012 and August 25, 2014.  This is misleading for multiple reasons, one of which he himself admits in small type under the large, flashy graphic at the top of his article:
"These reveal that – while the long-term trend still shows a decline – last Monday, August 25, the area of the Arctic Ocean with at least 15 per cent ice cover was 5.62 million square kilometres." (emphasis added).
So, just what does that long-term trend show? This:


To generate this graph, I downloaded the NSIDC Arctic sea ice data and plotted the August 25 sea ice extent for each year.  When data for August 25 was missing, I interpolated by averaging the values for August 24 and August 26.  I've added a loess smooth and 95% confidence intervals to highlight the trend.
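For anyone who wants to reproduce the graph, here's a minimal sketch of the calculation in Python (pandas + statsmodels).  The file name and column names are assumptions; adapt them to however you save the NSIDC daily extent data.  Note that statsmodels' lowess returns only the smooth itself, not the 95% confidence band.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Assumed local copy of the NSIDC daily sea ice extent file with
# columns: year, month, day, extent (millions of km^2)
daily = pd.read_csv("nsidc_daily_extent.csv")

records = []
for year, grp in daily.groupby("year"):
    aug = grp[grp["month"] == 8].set_index("day")
    if 25 in aug.index:
        extent = aug.loc[25, "extent"]
    elif 24 in aug.index and 26 in aug.index:
        # Fill a missing Aug 25 by averaging Aug 24 and Aug 26
        extent = (aug.loc[24, "extent"] + aug.loc[26, "extent"]) / 2
    else:
        continue
    records.append((year, extent))

aug25 = pd.DataFrame(records, columns=["year", "extent"])

# Loess (lowess) smooth of the August 25 extents
smooth = lowess(aug25["extent"], aug25["year"], frac=0.5)

plt.scatter(aug25["year"], aug25["extent"], label="Aug 25 extent")
plt.plot(smooth[:, 0], smooth[:, 1], color="red", label="loess smooth")
plt.xlabel("Year")
plt.ylabel("Sea ice extent (millions of km^2)")
plt.legend()
plt.show()
```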

Is it any wonder why Rose wants to focus on the increase since 2012 while giving little more than lip service to the long-term trend?  The long-term data show that August 25, 2014 had the 8th-lowest extent for that date in the satellite record.  The long-term trend?  Still firmly negative.  So why the focus on the increase since 2012?  Simple.  It's the only part of the satellite record he could spin to fit his narrative.  It wouldn't fit his "all is well" story if he admitted that the Aug. 25, 2014 extent represents a loss of 1.9 million square kilometers from the same day in 1979.  It's the equivalent of a gambler boasting of a 43% gain after winning back $17 while ignoring the fact that he had lost a total of $36 before his win.

He also claims that the ice is thicker and implies that ice volume has recovered, but he shows no data on that point, relying instead on the colors in a satellite image while utterly ignoring the trend in ice volume.  Examining the data shows why.


The take-home message from the volume data?  The ice has "recovered" only enough to get back up to the long-term trend line.  Over the last few years, volume had been declining faster than the overall trend; now it's right on the trend.  Not quite the picture of a "recovery" that Rose attempts to paint.

I'll leave the final word to Dr. Ed Hawkins, whom Rose quoted near the end of his article.
"Dr Hawkins warned against reading too much into ice increase over the past two years on the grounds that 2012 was an ‘extreme low’, triggered by freak weather.

‘I’m uncomfortable with the idea of people saying the ice has bounced back,’ he said.
"
 That is a hilarious quote for Rose to include, as Rose spent his entire article trying to do just what Hawkins warned against.

Thursday, August 28, 2014

So what if CO2 was 2400 ppmv in the Mesozoic

This is a response to those who try to claim that global warming won't be so bad.  The gist of their argument is that since life thrived in the Mesozoic, when CO2 was ~2400 ppmv and temperatures were 8ºC warmer than today, climate change today isn't anything to worry about.  Unfortunately, this argument ignores some very basic facts about biology and physics.  Here is some of what they're ignoring.

1) First, thanks to those individuals for accidentally confirming the relationship between CO2 and global temperature, as well as modern estimates of climate sensitivity.  At modern solar radiation levels and with a climate sensitivity parameter of 0.809ºC per W/m2, a simple equilibrium calculation predicts that CO2 at 2400 ppmv would push global temperatures 9.3ºC above pre-industrial levels (see the sketch after this list).  Factor in the weaker sun of the Mesozoic and you get the roughly 8ºC rise experienced from 2400 ppmv CO2 back then (Royer 2006).  Got to love it when those who dismiss science score an own goal and don't even realize it.

2) The species we have living on this Earth are not the same as the species that existed during the Mesozoic.  Then, the land was dominated by various species of dinosaurs, the air by pterosaurs, and the seas by ichthyosaurs, mosasaurs, and plesiosaurs.  The dominant plants of the Triassic and Jurassic were various species of gymnosperms, while the Cretaceous saw the rise of the angiosperms.  But that is largely irrelevant for today's species.  Most of today's species evolved during the Pleistocene, when global average temperatures were usually 4.5ºC colder than today.  Species are highly sensitive to changes in the normal temperature regime to which they have adapted.  Even a shift of a few tenths of a degree C is enough to make species migrate toward the poles and change their phenology.  A temperature increase of 8ºC above today's levels would be catastrophic for today's species, many of which are already at the upper limits of their normal temperature range.

3) While the total amount of warming is important, the rate at which that warming occurs is even more important.  A slow rate would allow species to evolve adaptations to the change in temperatures.  Unfortunately, the current rate of temperature change is far faster than the rate of evolutionary adaptation to changes in temperature.  Quintero and Wiens (2013) found that vertebrate species can adapt to at most 1ºC of temperature change per million years.  The current rate of temperature change over the past 30 years is 1.6ºC per century (equivalent to 16,000ºC per million years), over 10,000 times faster.
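For those who want to check the arithmetic in point 1, here is the back-of-the-envelope version in Python.  It uses the standard simplified expression for CO2 forcing, ΔF = 5.35 × ln(C/C0) W/m2 (Myhre et al. 1998); the 280 ppmv pre-industrial baseline is my assumption.

```python
import math

lambda_sens = 0.809      # climate sensitivity parameter, degC per W/m^2
co2_preindustrial = 280  # ppmv (assumed pre-industrial baseline)
co2_mesozoic = 2400      # ppmv

# Simplified CO2 radiative forcing (Myhre et al. 1998)
forcing = 5.35 * math.log(co2_mesozoic / co2_preindustrial)   # ~11.5 W/m^2

warming = lambda_sens * forcing
print(round(warming, 1))  # ~9.3 degC above pre-industrial at modern solar output
```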

I'm sure there's more that I've left out or just didn't think of while writing this.  The bottom line is that those who argue that an increase in CO2 is no big deal are simply ignoring most of what we know about ecology, physiology, and evolution.

Tuesday, August 26, 2014

Roy Spencer and 95% of models are wrong

This is one that has been making the rounds since Spencer published it on his blog in February.  Here's the graph he created:


Take a good look.  Not only does his graph appear to show most of the models running higher than both the HadCRUT4 surface record and the UAH satellite record, but it also shows HadCRUT4 running higher than UAH.  That is...strange, to say the least.  The IPCC AR5 (aka CMIP5) models were calibrated against 20th-century temperatures (1900-1999) and have only been actually predicting temperatures since 2000.  However, Spencer's graph appears to show their output running higher than the observed temperature records for 1983-1999—during the calibration period.  That makes no sense at all.

What is going on?  Take a look at the y-axis label.  According to the y-axis label, Spencer simply subtracted the 1979-1983 average from the observations and models to create his temperature anomalies.  Displaying temperature graphs as anomalies isn't a big deal.  You can demonstrate that yourself easily.  Get GISS data and add 14.00 (the value of their 1951-1980 baseline average—okay, 13.998 if you're picky) and graph the resultant data.  Then graph the anomalies as given and compare the two graphs.  They will be the same, with the only difference being the values displayed on the y-axis.  The problem isn't in Spencer using anomalies.  The problem is the baseline he chose.  It's way too short.
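If you want to convince yourself of that, here's a quick sketch (the GISS file name and column layout are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed local copy of the GISS global annual means, columns: year, anomaly (degC)
giss = pd.read_csv("giss_annual.csv")

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(giss["year"], giss["anomaly"] + 14.00)   # approximate absolute temperature
ax1.set_ylabel("Absolute (degC)")
ax2.plot(giss["year"], giss["anomaly"])           # anomalies as published
ax2.set_ylabel("Anomaly (degC)")
plt.show()
# The two curves are identical in shape; only the y-axis values differ.
```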

In climate science, the normal temperature baseline is a 30-year average.  Why?  That is a long enough time period to remove the effects of weather from the baseline average.  Any one year—or even a run of years—can be hotter or cooler than normal just by random chance.  Those random effects cancel each other out when averaged together over longer time periods.  Spencer knows this.  His UAH satellite data is normally baselined to a 30-year average (1981-2010).  Using just five years as a baseline means that his anomalies are subject to interference from weather, as five years is far too short to cancel out vagaries of the weather.  Let's demonstrate that.  Here's HadCRUT4 and UAH set to the same 1981-2010 baseline:


Notice how well the two observational records match for the entire period?  Check out the 1979-1983 period Spencer cherry-picked as his baseline.  Notice how UAH is always higher than HadCRUT4 for that period?  That partly explains why Spencer's graph shows HadCRUT4 as consistently higher than UAH.  It has nothing to do with actual temperatures.  It's the difference in the baselines that gives the false appearance of an actual difference.  UAH just happens to have a higher baseline average than HadCRUT4 over those five years, which makes the UAH anomalies appear smaller than the HadCRUT4 anomalies.  Setting the baseline to 1979-1983 makes the graph look like this:


Compare the two graphs.  Now, it appears that HadCRUT4 anomalies are consistently warmer than the UAH anomalies, which is simply not true.  It's a false appearance created by fudging the baseline.
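The baseline effect is easy to reproduce yourself.  Here's a minimal sketch, assuming you have HadCRUT4 and UAH as monthly anomalies in a single file (file and column names are placeholders):

```python
import pandas as pd

# Assumed local file with columns: date, hadcrut4, uah (monthly anomalies,
# each on its own native baseline)
df = pd.read_csv("hadcrut4_uah_monthly.csv", parse_dates=["date"], index_col="date")

def rebaseline(series, start, end):
    """Express a series as anomalies from its mean over the chosen baseline period."""
    return series - series.loc[start:end].mean()

# 30-year baseline: offsets between the two series reflect long-term behaviour
had_30 = rebaseline(df["hadcrut4"], "1981-01", "2010-12")
uah_30 = rebaseline(df["uah"], "1981-01", "2010-12")

# 5-year baseline: whichever series happened to run warm during 1979-1983
# gets pushed down relative to the other for the entire record
had_5 = rebaseline(df["hadcrut4"], "1979-01", "1983-12")
uah_5 = rebaseline(df["uah"], "1979-01", "1983-12")
```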

Curiously, while HadCRUT4 now appears to show more warming than UAH, it's not always warmer the way Spencer's graph makes it appear.  Neither HadCRUT4 nor UAH starts at exactly zero in 1983 as they do in Spencer's graph.  Furthermore, Spencer's graph doesn't show the major spike in 1998; in fact, on his graph, UAH temperatures peak in 2013.  It's obvious that he did something beyond deliberately fudging the baseline.  The only question is what.  My guess is that he adjusted the observational data for ENSO, aerosols, and changes in solar radiation, similar to Foster and Rahmstorf (2011), and then readjusted the baselines until 1983 was the zero point.  While adjusting the data for those short-term effects was the right thing to do, fiddling with the baseline to give the false appearance that the IPCC models and HadCRUT4 were all warmer than UAH was not.  It reeks of deception.


What does Spencer's graph look like without the deliberately fudged baseline?


Note how well the IPCC models matched the observations prior to AD 2000?  Even after AD 2000, observations are well within the confidence interval for the models, even without adjusting for ENSO, aerosols, or changes in solar radiation.  Also note how the IPCC models are generally lower than either UAH or HadCRUT4 during the 1979-1983 period, which explains how Spencer got the graph he did.

In short, Spencer created his graph by deliberately fudging a baseline to give the false appearance that the observations were far lower than what the IPCC models were predicting.  What that graph calls into question is neither the models nor the observations but Spencer's integrity if he has to resort to deceiving his readers to maintain his increasingly untenable position contrary to the rest of the scientific world.

Addendum: Sou at HotWhopper has two excellent articles covering Spencer's deception with better graphics to display exactly what Spencer did:

http://blog.hotwhopper.com/2014/02/roy-spencers-latest-deceit-and-deception.html

http://blog.hotwhopper.com/2014/05/roy-spencer-grows-even-wearier.html

Thursday, August 21, 2014

More predictions of September Arctic sea ice extent

I published a prediction of Arctic sea ice extent on July 1 that was based on September sea ice extent from 1979 to 2013.  That model yielded a prediction that the average extent for September 2014 would be 4.135 million square kilometers.  However, that model does not take into consideration any other information we have on Arctic sea ice, such as the ice extent in previous months of the year.  It just gives the general trend of sea ice in September from year to year.  You cannot use it to predict ice extent based on current ice extent or conditions.

Given that limitation, I decided to build a regression model predicting average extent in September based on the average extents between March and August.  I quickly ran into a major problem: collinearity between months.  So instead of building one grand model, I was forced to build separate models for each predictor month.  Without further ado, here are the top three models based on R² value:

Month  | Model                | R²     | 2014 ice extent (millions of km²) | Predicted Sept. ice extent (millions of km²)
-------|----------------------|--------|-----------------------------------|---------------------------------------------
June   | -13.5300 + 1.6913x   | 0.7522 | 11.09                             | 5.23
July   | -4.80933 + 1.18618x  | 0.8796 | 8.17                              | 4.88
August | -1.69389 + 1.12965x  | 0.9674 | ?                                 | ?

I also tested models for March, April, and May but found that predictive ability decreased rather dramatically. For instance, the R² value for the May/September regression was only 0.3878, about half of that for June.
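For anyone who wants to reproduce the table above, each model is just an ordinary least-squares regression of September extent on a single month's extent.  A sketch in Python (file and column names are assumptions):

```python
import pandas as pd
from scipy import stats

# Assumed file of NSIDC monthly average extents, 1979-2013, with columns:
# year, march, april, may, june, july, august, september (millions of km^2)
monthly = pd.read_csv("nsidc_monthly_extent.csv")

# Fit September extent against each candidate predictor month
for month in ["june", "july", "august"]:
    fit = stats.linregress(monthly[month], monthly["september"])
    print(month, round(fit.intercept, 3), round(fit.slope, 4), round(fit.rvalue ** 2, 4))

# Prediction for September 2014 from the July model, given July 2014's extent
july_fit = stats.linregress(monthly["july"], monthly["september"])
print(july_fit.intercept + july_fit.slope * 8.17)   # ~4.88 million km^2
```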

So far, the predictions based on June and July ice extents are higher than the one made from the September trend by itself. The July prediction is very close to the median of the 2014 predictions submitted to the Sea Ice Prediction Network, whereas the prediction based on the September trend alone sits toward the bottom of the range.  I'll update this post with the August-based prediction once August is over.

From the ARCUS website

Wednesday, August 20, 2014

Where does 2014 rank in the hottest years on record so far?

While we're closing in on September, several of the global temperature datasets are still stuck on June and haven't released their July data yet.  Just for fun, let's compare how the first six months of 2014 stack up against previous years.  To answer the question in the title, I averaged the first six months of each year in the Cowtan-Way global temperature dataset.
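Here's a sketch of that calculation, assuming a local copy of the Cowtan-Way monthly series (file and column names are placeholders):

```python
import pandas as pd

# Assumed file with columns: year, month, anomaly (degC) for the Cowtan-Way series
cw = pd.read_csv("cowtan_way_monthly.csv")

jan_jun = cw[cw["month"] <= 6]
complete = jan_jun.groupby("year").filter(lambda g: len(g) == 6)  # need all six months
ranking = complete.groupby("year")["anomaly"].mean().sort_values(ascending=False)

print(ranking.head(10))   # ten hottest January-June periods
```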


The first six months of 2014 were the third-hottest such period since 1850, with an average anomaly of 0.588ºC, just edging out 1998 (0.585ºC).  Only 2010 (0.697ºC) and 2007 (0.626ºC) were hotter.

Top 10 hottest Jan-June periods:

2010: 0.697ºC
2007: 0.626ºC
2014: 0.588ºC
1998: 0.585ºC
2002: 0.573ºC
2005: 0.571ºC
2006: 0.497ºC
2013: 0.494ºC
2009: 0.493ºC
2003: 0.484ºC

It should be interesting to see how the rest of the year plays out.

Sunday, August 10, 2014

IPCC models versus actual trends

This is an extension of my previous post comparing IPCC models and actual temperature data.  I had a request to directly compare the observed rates of temperature rise with the predicted rise from the average of the AR5 models.  First, my methods: I averaged all 81 IPCC AR5 RCP8.5 model runs.  I then calculated the rate of change for the average of the models as well as for Berkeley Earth's Land + Ocean dataset, the new Cowtan-Way coverage-corrected version of HadCRUT4, and GISS.  All rates were calculated after compensating for autocorrelation.  With that out of the way, here are the rates of temperature rise for the last 30 full years (1984-2013) in three surface datasets that cover the entire globe versus the average of the 81 IPCC AR5 RCP8.5 runs:

Rate of temperature increase between 1984 and 2013 for the IPCC AR5 models (RCP8.5 scenario) versus the Berkeley Earth Land + Ocean, Cowtan-Way, and GISS datasets

While the average rate from the IPCC AR5 8.5 models is higher than the observations, the observations are well within the 95% confidence interval.  The difference in the rates is not statistically significant.
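For those who want to reproduce the observed rates, here is a sketch of one common way to compensate for autocorrelation: an ordinary least-squares trend whose confidence interval is widened using an AR(1) effective sample size.  It's a sketch of the general approach, not my exact script, and the placeholder data at the bottom are just there to make it runnable.

```python
import numpy as np

def trend_with_ar1_ci(t, y):
    """OLS trend (degC/year) with a 95% CI widened for lag-1 autocorrelation."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)

    # Lag-1 autocorrelation of the residuals
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

    # Effective sample size: autocorrelated residuals carry less information
    n = len(y)
    n_eff = n * (1 - r1) / (1 + r1)

    # Standard error of the slope using the effective sample size
    resid_var = np.sum(resid ** 2) / (n_eff - 2)
    slope_se = np.sqrt(resid_var / np.sum((t - t.mean()) ** 2))
    return slope, 1.96 * slope_se

# Placeholder annual means just to make the sketch runnable;
# substitute the real 1984-2013 values from each dataset.
years = np.arange(1984, 2014, dtype=float)
temps = 0.017 * (years - 1984) + np.random.normal(0, 0.1, size=years.size)
print(trend_with_ar1_ci(years, temps))
```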

The true test of the AR5 models, however, is their accuracy since January 2000.  That's the start of the actual predictions.  That also poses a problem for determining whether or not the models are accurate.  It's only been 14.58 years since January 2000.  You need over 17 years to reliably detect actual climate trends in temperature data (Santer et al. 2011), making comparisons since January 2000 largely meaningless as the following graph shows.


Notice the massive 95% confidence intervals around the observed trends?  The observed trends could be anything from less than -0.01ºC/year to over 0.03ºC/year.  That range easily contains the average trend calculated from the IPCC AR5 models.  So while it appears that the predicted average trend is far higher than the observed trends, the reality is that there's no statistical difference between the predicted and observed trends.  And while I'm at it, there's also no evidence that global warming has stopped or even really slowed—just look at that possible range around the observed trends.

The reality is that right now, there is no statistical basis for determining whether the IPCC AR5 models are wrong, even without accounting for random climate events like ENSO, volcanic eruptions, or changes in solar output. Published research shows that accounting for ENSO alone explains much of the discrepancy between the predicted trends and the observed trends (e.g., Kosaka and Xie 2013; Risbey et al. 2014).  The upshot of it all is that those proclaiming that the models are wrong are either greatly exaggerating or simply ignoring the evidence in favor of a simplistic and wrong view of how the world works.

Monday, July 21, 2014

Risbey et al. 2014

It seems the canard about how IPCC models are inaccurate just won't go away.  I've covered it before on this blog.  The newest incarnation of that canard revolves around a new paper by Risbey et al. (2014).  Many simply don't understand what Risbey et al. did, and they definitely don't understand the results of that paper.

What Risbey et al. did was fairly simple.  They used a moving 15-year window and evaluated multiple climate models based on each model's ability to match the actual El Niño/Southern Oscillation (ENSO) state over that window.  They took the models that best matched the actual ENSO state over that period—regardless of how accurate anything else was about the selected models—calculated the predicted temperature trend from each selected model, and compared those predicted trends to the actual temperature trend over that same period.  Then they shifted the window and repeated the exercise.  One of the results was a graph (Figure 2) that looks similar to one I created back in January to find the last time the Earth had a 15-year cooling trend.

Figure 2 from Risbey et al. 2014 showing actual 15-year temperature trends versus raw IPCC models 15-year trends

What their main results show is the reason their paper is "controversial" in the denial bubble.  Risbey et al. showed that the mismatch between climate models and atmospheric temperatures in recent years is due mainly to a mismatch between the predicted state of ENSO and the actual state of ENSO.  Not because the models are inherently wrong.  Not because scientists don't understand the climate system.  Not because the physics is wrong, or any other standard denier canard about climate models or climate scientists.  It's because computer models are unable to predict, years in advance, exactly what one chaotic phenomenon will do.  If there's a mismatch between that predicted input and the actual input, then the model temperature predictions will appear to be off.  Once they used the match to ENSO as the selection criterion, the predicted temperature trends were very similar to the actual trends, as their Figure 3 shows.

Figure 3 from Risbey et al. 2014 showing the match between model predictions and actual 15-year trends.

To those who keep up with the scientific literature, this will not be a surprise.  Others (e.g., Kosaka and Xie 2013) have shown before that the IPCC models accurately predict temperatures since 2000 if they are given actual ENSO values rather than predicted values.  Risbey et al. just adds to the evidence that the climate models are accurate and that the main problem lies in predicting ENSO, not in any inherent flaw in the models.  That should give us pause, as the effects of a chaotic oscillation will cancel out over time, leaving the long-term trend unchanged.  And we know where that trend is headed, regardless of the short-term effects of ENSO.
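To make the selection idea concrete, here's a heavily simplified sketch of the kind of windowed, ENSO-matched comparison Risbey et al. describe.  This is my paraphrase of the approach, not their code; the data structures (dictionaries of model temperature and Niño3.4 series aligned to the observations) are assumptions.

```python
import numpy as np

def enso_matched_trends(years, obs_temp, obs_nino34, model_temp, model_nino34,
                        window=15, n_best=4):
    """For each 15-year window, keep the model runs whose Nino3.4 evolution best
    correlates with the observed Nino3.4, then compare their mean temperature
    trend with the observed trend over that same window.

    years, obs_temp, obs_nino34: 1-D numpy arrays of equal length.
    model_temp, model_nino34: dicts mapping run name -> 1-D array on the same years.
    """
    results = []
    for start in range(len(years) - window + 1):
        sl = slice(start, start + window)

        # Rank runs by how well their ENSO state matches the observed ENSO state
        scores = {name: np.corrcoef(obs_nino34[sl], model_nino34[name][sl])[0, 1]
                  for name in model_temp}
        best = sorted(scores, key=scores.get, reverse=True)[:n_best]

        obs_trend = np.polyfit(years[sl], obs_temp[sl], 1)[0]
        best_trend = np.mean([np.polyfit(years[sl], model_temp[name][sl], 1)[0]
                              for name in best])
        results.append((years[start], obs_trend, best_trend))
    return results
```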

So, what to make of the kerfuffle roiling the denialsphere over Risbey et al.?  Much of it is simple ignorance—they don't understand what Risbey et al. have done and what their paper shows.  The rest is simply willful ignorance from those who should know better.