ForecastAdvisor Weather Forecast Accuracy Blog
Sunday, September 24, 2006
Accuracy of Temperature Forecasts
ForecastAdvisor provides an accuracy measurement for one- to three-day-out forecasts combined. But ForecastWatch keeps data on forecasts out to nine days, and has data on forecasts for each day out. The percentage of temperatures within three degrees is a basic measure of accuracy. It isn't the only measure, but it is the accuracy measure most commonly known to non-meteorologists. Every city, it seems, has a television meteorologist who proclaims a "three degree guarantee".
Another interesting measure is a forecast "miss". If a temperature forecast is off by ten degrees or more, it is called a miss. That means that if the actual temperature was 80 degrees, a forecast is considered a miss (or a blown forecast) if the forecast temperature was 70 degrees or below, or 90 degrees or above.
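Both measures are simple to compute from forecast/actual pairs. Here is a minimal sketch in Python; the temperature pairs are made-up illustrative numbers, not ForecastWatch data:

```python
# Hypothetical (forecast, actual) high temperature pairs in degrees F.
pairs = [(78, 80), (83, 80), (70, 80), (91, 80), (79, 80)]

def accuracy_stats(pairs):
    """Return (fraction within three degrees, fraction of 'misses').

    A forecast counts as accurate if it is within three degrees of the
    observed temperature, and as a miss if it is off by ten degrees or more.
    """
    errors = [abs(forecast - actual) for forecast, actual in pairs]
    within_three = sum(e <= 3 for e in errors) / len(errors)
    missed = sum(e >= 10 for e in errors) / len(errors)
    return within_three, missed

within_three, missed = accuracy_stats(pairs)
print(f"within three degrees: {within_three:.0%}, missed: {missed:.0%}")
# → within three degrees: 60%, missed: 40%
```

Note that the two measures are not complementary: a forecast off by, say, six degrees is neither "within three degrees" nor a miss.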
The chart below shows the "within three degrees" and "missed forecasts" rates for both high and low temperature forecasts from all providers, for one to nine days out, in 2005. Not all of the providers tracked offer forecasts out to nine days (and some offer even more). Each bar at one day out represents about one and a half million forecasts; each bar at nine days out represents about 800,000 forecasts.
At one day out, for the entire country, high temperature forecasts are within three degrees of the observed afternoon high about 68% of the time. High temperature forecasts are blown one day out about 3% of the time. Many of these blown forecasts one day out are because of climate extremes that the models don't handle well, or timing errors with cold or warm fronts.
You might notice that the low temperature accuracy is lower than the high temperature accuracy. There are a couple of reasons for this. One, forecasts are taken at 6 pm, and the high usually occurs around 3-6 pm, whereas the low occurs around 3-6 am the next morning. Most forecasters, when they forecast a low temperature, forecast the overnight low. For "tomorrow's" (one-day-out) forecast, the high will occur about 24 hours after the forecast, and the low 12 hours after that. That 12-hour difference matters a lot one day out, but becomes less important the farther out the forecast is. This is apparent in the graph: at nine days out, the difference between high and low temperature accuracy is only 1.5%, whereas at one day out it's 7%.
Notice also that the "within three degrees" accuracy seems to taper off; if you draw an imaginary trend line and continue it forward to 10, 11, 15, etc. days out, it looks as if it would converge on an accuracy around 30% or 35%. You might need to click on the graph to view the larger version to notice this. This is significant because the average accuracy of a climate forecast is about 33%. A climate forecast simply takes the normal (average) high and low for the day and uses that as the forecast. So at nine days out, forecasters still show some skill: they are better than just using the normal temperature for the day. But not by much.
Sunday, September 17, 2006
The Wall Street Journal Online Article about ForecastAdvisor
On Thursday, the Wall Street Journal Online published a column by Carl Bialik, The Numbers Guy, called "Grading Weather Forecasts". It was about what I do here at ForecastAdvisor. Thank you everyone for all the positive comments and suggestions that I have received from people who have read the article and have tried ForecastAdvisor.
In the article, Dr. Bruce Rose, Vice President and Principal Scientist for weather systems at The Weather Channel, stated that July and August are the easiest months to forecast for temperature, with February the toughest. He's right, but I wanted to expand on that comment.
The graph below shows high and low temperature error by month for all forecasters ForecastWatch tracks. The error measurement used is what's called "RMS error", or "root-mean-squared error." This error measurement takes the error value (forecasted temperature minus actual temperature) and squares it. This makes all error values positive, and also penalizes forecasts that are way off much more than forecasts that are close: a forecast 10 degrees off is given an error four times that of one 5 degrees off, rather than just two times if the error value were not squared. All the squared errors are then averaged and the square root is taken, so that the unit of error is still degrees.
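The RMS calculation described above can be sketched in a few lines of Python (the temperature pairs here are invented for illustration):

```python
import math

# Hypothetical (forecast, actual) temperature pairs in degrees F.
pairs = [(72, 75), (68, 68), (80, 85), (55, 54)]

def rms_error(pairs):
    """Root-mean-squared error: square each error so large misses are
    penalized disproportionately, average the squares, then take the
    square root so the result is back in degrees."""
    squared = [(forecast - actual) ** 2 for forecast, actual in pairs]
    return math.sqrt(sum(squared) / len(squared))

print(round(rms_error(pairs), 2))  # → 2.96
```

Because of the squaring, the single 5-degree error in this sample dominates the result: the RMS error (about 2.96 degrees) is noticeably higher than the plain mean absolute error (2.25 degrees) would be.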
In the graph below, each month's data point is the aggregation of about 600,000 forecasts one- to five-days out from all the providers. I think it is a fairly representative sample. Note the dips and peaks in the error graph. The error lines peak in the winter, and bottom out in the summer. The graph's y-axis starts at 3 degrees error to emphasize the difference, but even so, a winter temperature forecast has about 75% more error on average than a summer temperature forecast.
Just like Dr. Rose said, this past February was the worst month for error in 2006 so far, and the previous July had seen the least error before that. But why is it easier to forecast temperatures in the summer than in winter? For one, even in places like Key West, Florida, with some of the most unchanging weather in the continental U.S., there is more temperature variation in winter than in summer. The more temperatures fluctuate, the harder they are to predict.
Just for fun, I've added a linear trend line to the high and low temperature error graphs. If it's to be believed, the linear trend is down, which means forecasts are slowly getting better. This past winter, temperature forecasts did better overall than the winter of 2004-2005. It could also just be El Nino starting...