ForecastAdvisor Weather Forecast Accuracy Blog

A Blog Documenting the Intersection of Computers, Weather Forecasts, and Curiosity
 

June 2, 2006

Precipitation Accuracy

Precipitation forecasting, and calculating precipitation forecast accuracy, is a bit different from temperature forecasting and calculating temperature forecast accuracy.

With temperature forecasts, you are dealing in numbers. If a forecast says it is going to be 80 degrees and it turns out to be 75, the error is easy to calculate: the forecast was off by +5, five degrees too warm.
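To make that concrete, here is a tiny Python sketch with made-up numbers (not ForecastWatch code) that computes the signed error for each forecast and the mean absolute error across them:

    # Toy example: forecast vs. actual high temperatures in degrees Fahrenheit.
    forecasts = [80, 72, 65]
    actuals   = [75, 74, 65]

    # Signed error: positive means the forecast was too warm.
    errors = [f - a for f, a in zip(forecasts, actuals)]
    mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error

    print(errors)  # [5, -2, 0]
    print(mae)     # about 2.33 degrees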

Precipitation is different. A forecaster might predict a "slight chance of rain", or "snow likely". If it rains on a day a forecaster predicted a "slight chance of rain", was the forecast right? What if it doesn't rain? What if it only rained 1/100th of an inch? Or half an inch? It is not nearly as clear how one should grade accuracy in that case.

The simplest thing to do is to call a forecast a "rain event" if the forecast mentions rain at all. There is some merit to this. When people hear "rain" they often think it is going to rain, even if it was preceded by "30 percent chance of". This basic accuracy statistic is useful, and it is used for the basic accuracy measurements on ForecastAdvisor. But it is basic, and we calculate more advanced statistics in ForecastWatch.
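If you wanted to compute this basic measure yourself, a rough Python sketch might just scan the forecast text for precipitation words (the keyword list below is only an illustration, not the one ForecastAdvisor uses):

    # Treat a forecast as a "rain event" if its text mentions precipitation at all.
    # This keyword list is illustrative only.
    PRECIP_WORDS = ("rain", "showers", "snow", "drizzle", "thunderstorm", "sleet")

    def mentions_precip(forecast_text):
        text = forecast_text.lower()
        return any(word in text for word in PRECIP_WORDS)

    print(mentions_precip("30 percent chance of rain"))  # True
    print(mentions_precip("Sunny and mild"))             # False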

One problem is that it doesn't rain or snow on about 7 out of every 10 days. So if you always forecast no precipitation, you will already be right about 70% of the time. We might then want to look only at a forecaster's ability to predict precipitation, and ignore his or her ability to predict non-precipitation, since non-precipitation is the norm. Consumers of weather forecasts might prefer this measure as well. They don't particularly care that it isn't going to rain, but they do want to know when it will rain. One common measure of accuracy in forecasting an event (like rain or snow) is the critical success index, or "threat score". It ignores correct forecasts of non-events: it is the fraction of correct event forecasts (hits) out of all the cases where the event was either forecast or actually occurred (hits, misses, and false alarms).
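Written out, the threat score is just hits divided by (hits + misses + false alarms). A quick Python sketch with made-up counts:

    # Critical success index (threat score) from simple contingency counts.
    # hits: rain was forecast and it rained
    # misses: it rained but rain was not forecast
    # false_alarms: rain was forecast but it stayed dry
    # Correct "no rain" forecasts are ignored entirely.
    def critical_success_index(hits, misses, false_alarms):
        return hits / (hits + misses + false_alarms)

    print(critical_success_index(hits=45, misses=15, false_alarms=20))  # 0.5625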

Another interesting statistic of precipitation forecasts is bias. Forecast temperature bias is how much higher or lower forecasts are than what actually occurred. For example, a temperature bias of 1 degree means that forecasts are, on average, one degree higher than what actually occurred. For precipitation, bias is the ratio of predicted events to actual events. If it actually rained 30% of the time but rain was forecast 31% of the time, the bias would be 1.03: rain was predicted to occur 3% more often than it actually occurred.
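As a quick sketch of the example above (the counts are hypothetical):

    # Precipitation bias: ratio of days rain was forecast to days it actually rained.
    def precip_bias(days_rain_forecast, days_rain_observed):
        return days_rain_forecast / days_rain_observed

    # 31 forecast rain days vs. 30 observed rain days out of 100:
    print(round(precip_bias(31, 30), 2))  # 1.03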

You'd think a bias of 1 (no bias) would be preferable. But that isn't always the case. Some forecasters believe that consumers would prefer a rain forecast that fails over a non-rain forecast that fails. If you can't predict with 100% accuracy, then predicting rain more often in the cases where you aren't sure is more valuable to consumers than only predicting rain when you are sure. That reasoning makes some sense: we would rather be pleasantly surprised by a sunny day we thought would be rainy, than be caught unprepared in a rain shower when sun was forecast.

However, some might say that the economic cost of over-forecasting precipitation is higher than that of under-forecasting it. A humorous article by Rich Adams, editor of the Cheboygan Daily Tribune, sums it up nicely:

Granted, the Weather Service can be off sometimes. Rude often pointed out that the National Weather Service on a Tuesday predicted heavy rain for a summer weekend, and by the time the weekend arrived there was nothing but sunshine and warm weather. He attributed the earlier prediction to downstate tourists canceling their weekend plans based on a prognostication five days out that turned out to be wrong.

"What would you tell them?" I asked.

"To look out the doggone window before they call for rain," Rude said wryly.

"Be serious," I said.

"OK, I would tell them to put a positive spin on things instead of a negative outlook," Rude said. "Sure, the weather report might call for a 30 percent chance of rain. That wee amount might prompt some tourist to cancel their hotel reservations while they still can. But if the National Weather Service said there was a 70 percent chance of sunshine and warm breezes, there wouldn't be any cancellations."

He had a point.

The value of a precipitation forecast, or the cost of an incorrect one, might be different depending on whether you are the tourist or the shop owner. Is a negative spin (or bias) better than a positive one? It depends.

 

 