ForecastAdvisor Weather Forecast Accuracy Blog

A Blog Documenting the Intersection of Computers, Weather Forecasts, and Curiosity
 

 

Thursday, June 30, 2016


Updates for 2016

I have been lax in updating the blog here, but it's only because we have been incredibly busy.

We are continuing to improve the quality of the data and keep it up-to-date. Currently, "Last Year" indicates 2015 data, and "Last Month" generally covers the prior month for the window running from the 20th of one month to the 19th of the next. So from June 20th to July 19th, "Last Month" reflects May data.

We also removed WeatherBug and CustomWeather due to issues with their forecasts, and brought back World Weather Online. Look for additional changes over the next year. If you have suggestions for other providers to add, or features you would be interested in, please don't hesitate to email us.

permalink

Monday, February 10, 2014


Changes and Additions to Start 2014

The "Last Year" accuracy statistics now reflect 2013 data, and we have added Dark Sky daily forecasts to the "Last Month" statistics. The forecasts come from their forecast.io site using their public API.

Dark Sky was added after a few users of ForecastAdvisor emailed to ask if we'd include it. If you have suggestions for other providers to add, or features you would be interested in, please don't hesitate to email us.

permalink

Monday, October 28, 2013


Ongoing Problem with NWS Mobile Website Produces Hourly Data Discrepancies

More and more, it seems, weather providers are placing greater emphasis on presenting hourly forecasts for their website viewers and mobile app users. And, amazingly enough, some are forecasting temperatures and precipitation in 15-minute increments.

ForecastWatch feels this is the next frontier in the assessment of forecast accuracy. We are in the process of accumulating hourly forecasts so that customers and other interested parties can assess the accuracy of these short-term forecasts.

From time to time we take a casual, unscientific look at the hourly forecasts. In the process, we noted an error with the National Weather Service’s mobile website that has gone unresolved for many months.

For iPhone and iPad users of the NWS mobile website, the hourly information is inconsistent with the information posted on the regular website. Here’s an example:

We checked the hourly forecasts for St. Paul, Minnesota, during the early evening of Thursday, Oct. 24, 2013. The graph below reflects a low of 28 forecast to occur at 6 a.m. and a high of 53 occurring at 3 p.m. on Friday, Oct. 25. This seems generally normal given that there are no significant warm fronts moving through at night, nor major cold fronts coming through during the daylight hours.

However, the hourly temperatures displayed by the mobile website are noticeably different. We took screen shots of the four successive six-hour forecasts capturing the next 24 hours. The mobile website data shows that the low temperature of 28 will occur at 2 a.m. and 3 a.m. It also shows that the high of 53 will be reached at 12 p.m. and 1 p.m. on Friday.

This situation is not limited to St. Paul; we also noticed the timing difference in other cities, such as Chicago and Washington. We’ve also noticed that the error occurs during evening updates and is not present at all times of the day.

Our best guess is that the problem is due to a programming error that manifests during certain times of the day. From our brief review, it appears that temperatures seem to shift ahead by three or five hours in the mobile website, thus causing the incorrect appearance that low temperatures occur in the middle of the night (rather than at dawn) and that high temperatures occur closer to noon (rather than late afternoon). With minor changes to programming for its mobile website, the NWS should be able to fix this error relatively easily and provide the appropriate and intended experience for its users.

permalink

Friday, June 28, 2013


Out With The Old, In With The New

ForecastWatch has added coverage of three weather forecast providers for both the "Last Month" and "Last Year" accuracy statistics.

We have added coverage for WeatherBug, Foreca, and MeteoGroup. WeatherBug was added due to strong ForecastAdvisor user demand, and Foreca and MeteoGroup were added as they are popular internationally and increasing their U.S. presence.

However, in order to keep the table to a reasonable size, we've also removed three providers. We removed Intellicast because that site now carries the same forecast as The Weather Channel, World Weather Online because its forecasts were not competitive (it always appeared in the bottom two for every location), and the NWS Digital Forecast (forecasts from the NDFD) because it is effectively a duplicate of the NWS website and isn't an end-user product.

These changes should make the site more interesting and usable. As always, please send suggestions and comments!

permalink

Wednesday, September 19, 2012


Icon Forecast Bias and Pleasant Surprises

Nate Silver wrote a book on forecasting in many different domains called "The Signal and The Noise". It will be published in a few weeks, but last weekend The New York Times excerpted a section on weather forecasting in its magazine. You can read the excerpt here. It's a great read.

I wanted to focus on one of the last paragraphs in the excerpt from the New York Times. It says:

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

Specifically, I'd like to shed more light on Dr. Bruce Rose's comment. First, let's look at the second part of his comment, "if it has zero bias in precipitation, we'd probably be in trouble". What is "bias in precipitation"? Bias in precipitation means that a forecaster has a tendency to forecast precipitation either more or less often than what actually happens. For example, if a forecaster forecasts precipitation 35% of the time, but there is measurable precipitation on only 29% of days, that forecaster is said to have a "wet bias". Conversely, if a forecaster predicts precipitation only 20% of the time over the same period, the forecaster is forecasting precipitation less often than it actually occurs. This forecaster has a "dry bias".
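As a concrete illustration, here is a minimal Python sketch of the bias calculation. The lists are made up to match the 35% versus 29% example above; nothing here is ForecastWatch's actual code or data.

```python
# Minimal sketch of wet/dry bias, assuming paired lists of booleans:
# one entry per city-day, True if precipitation was forecast / observed.

def precipitation_bias(forecast_precip, observed_precip):
    """Return (forecast frequency, observed frequency, bias ratio)."""
    n = len(forecast_precip)
    forecast_rate = sum(forecast_precip) / n
    observed_rate = sum(observed_precip) / n
    # A ratio above 1.0 is a "wet bias" (precipitation forecast more often
    # than it occurs); below 1.0 is a "dry bias".
    return forecast_rate, observed_rate, forecast_rate / observed_rate

forecasts = [True] * 35 + [False] * 65       # precipitation forecast on 35% of days
observations = [True] * 29 + [False] * 71    # measurable precipitation on 29% of days
print(precipitation_bias(forecasts, observations))  # (0.35, 0.29, ~1.21)
```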

Since Dr. Rose worked for The Weather Channel, I thought I would demonstrate using TWC forecasts. Let's say you get home from work, have dinner, and before you retire for the evening, you look on weather.com at tomorrow's forecast. Specifically, you look at the forecast graphic (or icon) to see if there will be precipitation tomorrow. So far this year, if you did that around the country, on average you'd see a precipitation icon 32% of the time. However, for those same locations, there was measurable precipitation only 27% of the time. The graph below shows TWC's one-day-out icon forecasts for each year from 2005 through 2012 year-to-date (January-June). It covers around 800 cities in the United States, or about 275,000 forecasts each year, for a total of exactly 2,063,813 forecasts from 2005 onward.

The graph makes clear that The Weather Channel has a "wet bias", that is, it forecasts precipitation in its icons more than there is actually measurable precipitation. Obviously, if TWC, or any forecaster, could be a perfect forecaster, it would have a zero bias. It would forecast precipitation only when precipitation would occur, and forecast no precipitation when it would be dry. But TWC isn't perfect, nor is any forecaster. So there will be some days it will forecast precipitation, and it will be dry, and some days it will forecast dry skies and it will pour.

Let's look at just how imperfect TWC's icon precipitation forecasts are. We know that perfection is the upper bound, but what would be the lower bound we would expect? Well, since only about 27% of days have had measurable precipitation this year, if we always forecast precipitation, our percent correct would be 27%. If we always forecast dry days instead, our accuracy would be 100% minus 27%, or 73%. So our low bar is 73 percent correct. The following graph shows The Weather Channel's accuracy with respect to just such an unskilled forecast. As you might expect, TWC's icon precipitation forecasts do show skill.

So TWC's forecasts aren't perfect. So far this year, they have been wrong almost 18% of the time (for one-day-out). There are going to be days when TWC isn't quite sure if it is going to rain (or snow) or not. The models are diverging, or there is some probability that the front will stall, or something. On those days, do you forecast precipitation, or not? This is where the bias that Dr. Rose talks about comes in. Generally the average consumer of forecasts would rather be pleasantly surprised by a forecast for rain that turns out sunny, than to be caught unprepared for a rain storm. So forecasters like TWC, in general, are going to bias their forecast toward precipitation. That is, when they are unsure, they are going to lean towards forecasting precipitation more than not. And that's where the wet bias happens, and why forecasters predict precipitation more than it actually occurs.

So let's look at these incorrect forecasts. In 2012 so far, TWC has been incorrect 17.7% of the time predicting tomorrow's precipitation with its icon forecast. Those wrong forecasts are one of two types: a forecast for precipitation that ends up dry, or a forecast for dry that ends up with precipitation. The first type, a forecast for precipitation that ends up dry, is a "pleasant surprise". Or as Mr. Silver put it in his book: "People don’t mind when a forecaster predicts rain and it turns out to be a nice day." The second type, a dry forecast that ends up wet, is "unpleasant". Or again as Mr. Silver puts it: "But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic." The following graph breaks down The Weather Channel's incorrect forecasts by type, either unpleasant or pleasant.

As you can see on the graph, the ratio of pleasant to unpleasant surprises is about two-to-one, though it has been declining slightly since 2007. So far in 2012, for example, 6.7% of the time TWC forecasted a dry day and there was measurable precipitation, but 11% of the time TWC forecasted precipitation and it ended up dry. So when TWC is in error, it is far more likely to be a "pleasant" error than an unpleasant one, due to TWC's wet bias. And this is why Dr. Rose states that if forecasters had zero bias (at current accuracy rates) there'd be trouble.

So let's see, just for fun, if we can discern any patterns in TWC's icon selection algorithm. Can we figure out any rules for when a forecast will be a precipitation forecast? Well, we also collect TWC's probability of precipitation forecast. Are there any patterns in the selection of the icon that are related to PoP? I looked at just that for 2012 so far, and I'd graph it for you but it's not very interesting. Basically, TWC won't show a precipitation icon when its probability of precipitation is 0%, 10%, or 20%, and will always show a precipitation icon at 30%, 40%, and higher. Since we know that there was measurable precipitation 27% of the time in 2012 so far, what that says is that when TWC believes the chance of precipitation is any greater than the climatological average, it will display a precipitation icon in the forecast.

What would happen if we changed TWC's current rule, which is to display a precipitation icon at a PoP of 30% or higher? How would that change the icon's accuracy and other properties? The following table and graph show this for 2012 so far (January through June) for one-day-out forecasts. The highlighted row at 30% is what the accuracy properties would be if TWC placed a precipitation icon when PoP is 30% or higher and a non-precipitation icon below 30%, which is exactly what they do. If TWC placed a precipitation icon at 0% or higher (the first row), that would mean they always place a precipitation icon. In that case, they'd be right on precipitation days (27.21% so far this year) and wrong otherwise. Conversely, if they NEVER placed a precipitation icon, they would be right 72.79% of the time. Of course, every time it rained or snowed that would be an unpleasant surprise, as it would not have been forecast. Sensitivity is a measure of how well the forecast identifies precipitation days; it is the percentage of correct forecasts, given that there was precipitation. Specificity is a measure of how often the forecast doesn't forecast precipitation on dry days. Icon Precip is just the total percentage of icons that would be precipitation icons under that rule.

There are a few interesting items that pop out. One is that accuracy, as measured by the percent of correct icon forecasts, is maximized when the rule is to only display a precipitation icon at 40% PoP or higher. That gives a percent correct of almost 84%. The current rule TWC uses is only about 82%. Even though the current rule has a slightly lower accuracy, it is most assuredly of far greater value to TWC's customers, for two reasons. The first is that pleasant surprises are preferred over unpleasant ones, and only thresholds at 30% or under have more pleasant surprises than unpleasant ones. The second, and probably more important, reason is that at a 30% threshold, you maximize the average of sensitivity and specificity. Stated another way, it's the point at which the distance to perfection (100% in both sensitivity and specificity) is minimized. Now, that's assuming you value sensitivity and specificity equally. But that is a whole other topic!
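For the curious, here is a rough Python sketch of how a threshold table like the one above could be computed. The input format (paired PoP values and wet/dry outcomes) and the field names are assumptions of mine, not ForecastWatch's actual code.

```python
# Sketch of the PoP-threshold analysis: show a precipitation icon whenever
# PoP >= cutoff, then score the resulting icon forecasts.
# `pops` are PoP forecasts (0-100 in steps of 10); `wet` are booleans for
# whether measurable precipitation actually occurred.

def threshold_table(pops, wet):
    n = len(pops)
    n_wet = sum(wet)
    n_dry = n - n_wet
    rows = []
    for cutoff in range(0, 101, 10):
        icon = [p >= cutoff for p in pops]               # precip icon shown?
        hits = sum(i and w for i, w in zip(icon, wet))   # icon and precipitation
        correct_dry = sum(not i and not w for i, w in zip(icon, wet))
        pleasant = sum(i and not w for i, w in zip(icon, wet))    # icon, stayed dry
        unpleasant = sum(not i and w for i, w in zip(icon, wet))  # no icon, got wet
        rows.append({
            "cutoff": cutoff,
            "percent_correct": (hits + correct_dry) / n,
            "sensitivity": hits / n_wet if n_wet else float("nan"),
            "specificity": correct_dry / n_dry if n_dry else float("nan"),
            "icon_precip": sum(icon) / n,
            "pleasant": pleasant / n,
            "unpleasant": unpleasant / n,
        })
    return rows
```

Scanning the rows for the cutoff that maximizes percent_correct, versus the cutoff that best balances sensitivity and specificity (and keeps pleasant surprises ahead of unpleasant ones), reproduces the trade-off discussed above.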

I hope you found this informative. There is more to come. In the meantime, please check out Dr. Eric Bickel's page here which has some interesting tables on TWC's PoP forecasts. You can also check out the paper he authored and I co-authored about Probability of Precipitation forecasts that appeared in the Monthly Weather Review titled Comparing NWS PoP Forecasts to Third-Party Providers.

permalink

Friday, February 17, 2012


Grading the Groundhogs

What did the groundhog predict this year? Well, it depends on which groundhog you’re talking about.

There are now approximately 52 meteorologically inclined woodchucks in North America. ForecastWatch recently compiled a list of their various forecasts to determine which ones are most accurate.

If groundhog prognostications are to be trusted, those seeking an early spring should take a liking to the furry forecasters, whose efforts have been celebrated since 1887. Nearly 70 percent did not see their shadow, therefore predicting an early spring. Only 16 groundhogs saw their shadow, suggesting – according to folklore – that six more weeks of winter are in store.

While the largest number of groundhog forecasters reside in Punxsutawney Phil’s home state of Pennsylvania, where there are a total of ten, the tradition takes place far and wide, encompassing most states east of the Mississippi from Canada to Cuba. There are five such forecasters in Canada, New York and North Carolina. Even Cuba, in the form of a banana rat named Guantanamo Jay, boasts a means of predicting late winter weather based on animal behavior.

In some areas with multiple groundhogs, there were conflicting forecasts. In North Carolina, for example, Grady, Queen Charlotte and Sir Walter Wally ran for cover after seeing their shadows while Mortimer and Nibbles saw nothing. And in Pennsylvania, four of ten groundhogs saw their shadow.

In Wisconsin, there is groundhog consensus for an early spring. Betty, Jimmy the Groundhog and Wynter all did not see their shadows, historically foretelling an early spring for a region whose entire winter has already been somewhat springlike.

While you’re not likely to see groundhogs appearing before greenscreens taking viewers through radars and futurecasts, they do have more creative names than their human counterparts. Rather than Joe Bastardi and Jim Cantore, groundhog monikers include the likes of Chattanooga Chuck (Tennessee), Sir Thomas Hastings (Nebraska), Shubenacadie Sam (Nova Scotia) and Woodstock Willie (Illinois).

Alas, not all groundhogs charged with predicting weather for February and March are real. When not officially of the Marmota monax species, the furry predictors can be humans in costume (six instances) or even stuffed animals (nine times).

However, for weather enthusiasts and casual weather observers, what matters most is the accuracy of groundhog predictions. ForecastWatch, the leading provider of weather accuracy assessments for meteorologists, is tracking the accuracy of this most celebrated example of weather folklore. For a listing of North America’s weather-predicting groundhogs and their actual predictions, visit Groundhog Day 2012.

permalink

Thursday, June 9, 2011


World Weather Online Added

Forecast accuracy statistics from World Weather Online have been added, and you can see them in the "Last Month" table on both the summary and detail pages. They were added because they provide forecasts worldwide, and they use their own forecasting models and algorithms. They don't just repackage the NWS forecast or purchase forecasts from another provider. From World Weather Online's about page, World Weather Online...
"developed our own weather forecasting model which could deliver reliable and accurate weather information for any geo-point in the world. Our weather model is run along with other meteorological models to compare and deliver accurate weather forecasts."
I hope this makes ForecastAdvisor more useful and interesting!
permalink

Saturday, April 23, 2011


Why Not Us? NWS Link Now Goes to Weather.gov

When ForecastAdvisor started, the National Weather Service website was much different than it is today. Today, forecast pages come from the national site, with point forecasts pulled directly from the National Digital Forecast Database.

Back then, the forecast pages were the responsibility of the regional offices: the Eastern, Western, Central, Southern, Alaska and Pacific regions. When you typed a city and state on the main weather.gov site, you were redirected to one of the regional sites for your forecast. The problem at the time was two-fold. There was no way to programmatically generate a URL linking to a specific zip code, and the western region at that time didn't provide a zip code to city mapping. The western region's reluctance is understandable: in the west, zip codes cover huge areas, and it's difficult to pinpoint a specific city with a zip code.

So at that time, when you clicked on "National Weather Service" or "NWS Digital Forecast" in the weather forecast accuracy tables below the ForecastAdvisor forecast, it took you to a "Why Not Us?" page, letting you know that the ForecastAdvisor forecast is the NWS forecast, and that we couldn't link to the NWS forecast for your city directly.

Now that the NWS is a fully national site, and there is a way to generate a URL linking to a specific zip code's forecast, I've changed it so that clicking on "National Weather Service" will take you directly to the NWS forecast page for your city.

The "NWS Digital Forecast" link still takes you to "Why Not Us?" since the NWS Digital Forecast is the NDFD forecast and there isn't a web page for that. However, on the "Why Not Us?" page, there is a link to go to the NWS forecast page for your city, rather than just to the home page of weather.gov.

I hope this makes ForecastAdvisor even more useful to you, and I'd like to thank Anthony DeRobertis for emailing me and letting me know the NWS now has a way to generate URLs to get to specific zip code forecasts. Thanks Anthony!

permalink

Sunday, February 6, 2011


"Last Year" stats now 2010

Last year we had a bit of trouble producing the full year aggregations. Our server was under-powered and there were some database architecture and code changes that we needed to make.

All of that is complete, and we were able to complete the 2010 aggregations in a matter of days, instead of the month it took last year!

The result is, the "Last Year" statistics you see on the site now are for the full-year 2010!

permalink

Sunday, April 4, 2010


Last Year Stats Now 2009

The 2009 full year aggregations have run, and so the "Last Year" stats you see on ForecastAdvisor are now 2009 data. Also, "Last Month" is now February 2010 data.
permalink

Monday, March 29, 2010


Moving to a New Server

We are moving the ForecastAdvisor and ForecastWatch websites to a new, bigger, more powerful server today (Monday, March 29, 2010). We have seen large increases in the number of visitors, customers, and forecasts we collect over the past few years. Moving to this new server will allow us to continue to grow and add new features in the coming years. You may see intermittent down-time or issues with the pages (data not being displayed, etc.) over the next 24 to 48 hours as everything gets moved and caches get updated. Thanks for your patience during this time.
permalink

Tuesday, March 16, 2010


ForecastWatch Founder On Bloomberg TV

The founder of ForecastWatch, Eric Floehr, appeared on Bloomberg TV during the 1pm news hour.

The segment discussed how weather forecasts can be used in investing and trading, and how weather strongly affects consumer decisions and the economy. Eric made the following points:

  • The commercial weather forecast providers and the NWS and EC all do a good job
  • Different weather forecasters do better in different regimes: A forecasting company may do better in the plains, another better on the coasts
  • ForecastWatch customers include The Weather Channel, Telvent DTN, and CustomWeather
  • ForecastWatch also works with weather forecast customers to understand how weather forecasts can be used in modelling and decision-making
  • A one-degree improvement in the accuracy of weather forecasts could result in a savings of one billion dollars in energy costs
  • The more a weather forecast changes, the less accurate it will be

You can watch the segment on Sling here or on Bloomberg here.

permalink

Sunday, September 6, 2009


Ten Things Your Weather Forecaster Won't Tell You

The August 2009 issue of SmartMoney Magazine featured an article called "10 Things Your Weather Forecaster Won't Tell You". It was a well-researched article with a number of good points. The ten things are:

  • "Long-term forecast? Your guess is as good as ours."
  • “We’re pretty accurate—as long as the sun is shining.”
  • “We’re often more show biz than science.”
  • “Our high-tech gizmos do everything but predict weather.”
  • “Want the temperature? Don’t ask the National Weather Service.”
  • “Weather is big business.”
  • “Bad weather means big ratings...”
  • “...and it’s always bad during sweeps week.”
  • “Accuracy? Who cares.”
  • “Weather is recessionproof.”

You can read the full article here. ForecastAdvisor is mentioned prominently. It's a very good article and definitely worth the time to read.

permalink

Friday, March 6, 2009


How Good Are Week Out Weather Forecasts?

I am often asked if a forecast for a week or more out is worth anything. Worth is a hard thing to measure, but skill is easier. And if there is no skill, then the forecast likely isn't worth much.

At ForecastWatch we use two measures of skill: how much better a weather forecast is than climatology, and how much better it is than persistence. Both are unskilled forecasts, in that it doesn't take any skill to create them. The climate forecast says that the weather will be exactly like the 1971-2000 average. A persistence forecast says the weather however many days in the future will be exactly like today.

The easiest way to measure skill, at least for temperature forecasts, is to compare average absolute error. The difference between the temperature forecast, and the actual temperature that occurred, is the error. Take the positive difference, average it over a lot of forecasts, and that is a measure of how good a forecast is at predicting the temperature. An average absolute error of 3 degrees is generally better than an average absolute error of 6 degrees.
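As a small illustration (with made-up temperatures, not ForecastWatch data), here is how that comparison could be computed in Python:

```python
# Average absolute error for a provider forecast versus two unskilled baselines.
# All temperatures below are invented for illustration.

def mean_absolute_error(predicted, actual):
    """Average of the absolute differences, in degrees."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

actual      = [71, 68, 75, 80, 77]           # observed highs
provider    = [70, 70, 73, 82, 75]           # a forecaster's predictions
climatology = [72, 72, 72, 73, 73]           # 1971-2000 normal highs
persistence = [69, 71, 68, 75, 80]           # "tomorrow will be like today"

for name, predictions in [("provider", provider),
                          ("climatology", climatology),
                          ("persistence", persistence)]:
    print(name, mean_absolute_error(predictions, actual))
# provider 1.8, climatology 3.8, persistence 4.0
```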

The chart below shows the average absolute high temperature forecast error for all of 2008, for all stations, and all providers. Lower is better. Red is the unskilled persistence forecast. Yellow is the climate average forecast. The green line is the average of all weather forecast providers. The climate average forecast is a straight line because no matter how many days out you forecast, the climate average forecast for a given date is always the same.

2008 High Temperature Error Summary

What sticks out on this graph is the intersection of the yellow climate average line with the green forecasters' line at nine days out. What that means is that the weather forecasters predicted high temperatures nine days out just as well (or just as badly) as using the 1971-2000 climate average high temperatures as the forecast prediction for each location.

What is even more interesting is that for high temperature forecasts greater than nine days out, weather forecasters (at least the ones ForecastWatch tracks) do WORSE than the climate average forecast. This means that you would do better just looking at historical average temperatures when determining the temperature more than nine days out.

The American Meteorological Society recently said as much. Every ten years, they release an information statement describing the current state of weather science. The statement, released in August 2007, made this observation:

"The current skill in forecasting daily weather conditions beyond eight days is relatively low."

The AMS is very wise. And in fact, for this sample, composed of nearly 18 million forecasts over the year from multiple national providers, there is no skill in daily weather forecasts beyond nine days.

On the positive side, meteorologists always did better than the persistence (tomorrow is like today) forecast. There are many other interesting things about this graph that I'll talk about later.

permalink

Tuesday, February 6, 2007


New Monthly and Yearly Accuracy Data

New data for both the monthly and yearly accuracy data tables is now in. Last year now is for 2006, and last month is December 2006. There was a small delay in getting the hourly observations, and we did some year-end audits before we ran the full-year numbers.

January 2007 data is loading now, and should be reflected over the weekend.

I'll be posting more 2006 statistics soon.

permalink

Tuesday, January 2, 2007


Educational Vacation

We just returned from a Christmas vacation to Fort Myers, Florida to visit the folks. The weather was beautiful, and we were able to go to the beach one day, go on an airboat ride in the Everglades another day, and swim pretty much every day (though I must confess that the pool was heated).

We've visited Fort Myers every holiday season for the past several years. One of our kids' favorite things to do besides swimming and the beach is visiting the Imaginarium, Fort Myers' children's science museum. The museum is owned and operated by the city of Fort Myers on the site of a historical water plant.

The kids really enjoy the weather exhibit, and the hurricane simulation (which blows air in a chamber at 45 miles per hour). The weather exhibit includes several stations. One is about clouds and includes a "cloud maker" where kids can move their hands around in the "cloud" and see how solid clouds really are. Another shows current conditions and radar maps, along with a NOAA weather radio. Yet another simulates a thunderstorm.

What my two girls enjoyed the most was the interactive TV weather studio. There is a desk and microphone, with a US map with stick on symbols for high pressure, low pressure, sunny, etc. There is a camera pointed to the desk and map, which is "broadcast" to a television. See the picture below:

Weather broadcast at the Imaginarium

The weather map is from American Educational Products (note: I have not been paid nor asked to mention this company...I just think they offer some nice educational weather products) and could be purchased for your home, school, or science center for $36 here. Thankfully, you can also buy extra weather symbols. The Imaginarium weather map was missing quite a few compared to our last visit. I'm sure they "disappear" quite frequently. I'm going to contact them to see if they need a donation.

Have you been to a good educational weather display? Please let me know!

permalink

Wednesday, December 20, 2006


Anchorage Daily News Article

George Bryson of the Anchorage Daily News was interested in helping his readers understand the accuracy of Alaska weather prediction. He wanted to give his readers a better understanding of how difficult it is to forecast weather in Alaska. In addition to consulting local meteorologists in media and the National Weather Service, he contacted ForecastWatch for data and insights about forecasting weather.

I think the article is very well written and presents the data accurately and realistically. The article was on the front page of the Sunday, November 12th issue of the newspaper, and also appeared online. You can view a PDF of the online version.

permalink

Friday, November 24, 2006


What I'm Thankful For

On this Thanksgiving day, I hope all of you are enjoying sunny skies, good food, and are surrounded by the love of family and friends.

I am certainly thankful for my wife, two children, my family, friends, the great people I get to work with every day, and the beautiful Earth we have been given.

I am also very thankful for meteorologists this Thanksgiving. They provide a service that is often under-appreciated. Their work can and does save lives. I know that people sometimes like to kid that they would like to be a meteorologist because they'd like to be in a job where they could be right only half the time and not get fired. Understanding the complex dynamics of the atmosphere and its interaction with land and sea, and predicting out many days in advance takes a lot of skill, a lot of brains, and a lot of dedication.

I will never forget the National Weather Service bulletin that went out last year on August 28. It was right before Hurricane Katrina made landfall. It began "...DEVASTATING DAMAGE EXPECTED...". It was followed by:

.HURRICANE KATRINA...A MOST POWERFUL HURRICANE WITH UNPRECEDENTED STRENGTH...RIVALING THE INTENSITY OF HURRICANE CAMILLE OF 1969. MOST OF THE AREA WILL BE UNINHABITABLE FOR WEEKS...PERHAPS LONGER. AT LEAST ONE HALF OF WELL CONSTRUCTED HOMES WILL HAVE ROOF AND WALL FAILURE. ALL GABLED ROOFS WILL FAIL...LEAVING THOSE HOMES SEVERELY DAMAGED OR DESTROYED.

Reading the bulletin sent shivers down my spine. I've been very involved with the weather as an amateur and in my business (ForecastWatch), but I'd never read anything like that. It was unprecedented. It scared me. I cannot imagine how it made people in the path of the storm feel. But however they felt, that strongly worded statement saved lives.

Don't take my word for it, though. The government report on the government's response to Hurricane Katrina titled "A Failure of Initiative: Final Report of the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina" was very critical of many areas of our federal government. But it had this to say about the weather forecasters:

We reaffirmed what we already suspected — at least two federal agencies passed Katrina's test with flying colors: the National Weather Service (NWS) and the National Hurricane Center. Many who escaped the storm's wrath owe their lives to these agencies' accuracy. This hearing provided a backdrop for the remainder of our inquiry. We repeatedly tried to determine how government could respond so ineffectively to a disaster that was so accurately forecast.

In addition to the National Weather Service, Accuweather, The Weather Channel, and other private sector meteorologists helped warn citizens and helped in the response to the devastation. The Weather Channel, for example, created a message board where people looking for information about loved ones could connect, and gave more than $1 million to Hurricane Katrina relief efforts.

So this Thanksgiving I am thankful for all meteorologists whether employed by the government or the private sector, and for all they do to help us plan our weekend, and keep us safe from weather disasters.

Happy Thanksgiving!

permalink

Wednesday, October 11, 2006


A Very Cool September

The September accuracy data has been aggregated and is now available. You should see the "Last month" accuracy values on your forecast page have updated. For Columbus, Ohio, for example, there wasn't much change...Weather Channel moved up from #3 to #2 while the National Weather Service did the opposite.

Temperature accuracy is beginning its seasonal dip. Overall high temperature error was at its lowest in July, at 4.05 degrees. In August it started moving back up toward its mid-winter peak, and September continued the trend: overall high temperature error in September was 4.57 degrees. You can read more about the seasonal nature of weather forecast accuracy in this blog entry.

One interesting thing to note is that it was a very cool September. ForecastWatch tracks how an unskilled climate forecast compares to the forecasts from the weather forecast providers. But this also tells us how the climate is doing, because all we are doing is comparing climate normals with what actually happened. In September, for the roughly 800 observation locations we track, high temperatures were 2.33 degrees below 1971-2000 climate normals. Low temperatures, on the other hand, were only 0.14 degrees below normal. The National Climatic Data Center has said that September was the 31st coolest on record. There is more data from the NCDC here.

The map below is one available in ForecastWatch. Because it is from the perspective of the forecast, red means the forecast was too high, blue means the forecast was too low. If you want to look at it from the perspective of the actual temperatures, red areas are areas where temperatures were below climate normals, and blue where they were above.

The map shows how a climate normal forecast did in September. Red areas indicate areas where a forecast of the climate normal high was too high (in other words, the actual high temperature was below normal, on average). The blue areas are areas where the actual high temperature was above climate averages.



You can compare this map to the one produced by the NCDC. I think they are pretty comparable.

permalink

Sunday, September 24, 2006


Accuracy of Temperature Forecasts

ForecastAdvisor provides an accuracy measurement for one- to three-day-out forecasts combined. But ForecastWatch keeps data on forecasts out to nine days, and has data on forecasts for each day out. The percentage of temperatures within three degrees is a basic measure of accuracy. It isn't the only measure, but it is the accuracy measure most commonly known to non-meteorologists. Every city, it seems, has a television meteorologist who proclaims a "three degree guarantee".

Another interesting measure is a forecast "miss". If a temperature forecast is off by ten degrees or more, it is called a miss. That means that if the actual temperature was 80 degrees, a forecast is considered a miss (or a blown forecast) if the forecast temperature was 70 degrees or below, or 90 degrees or above.
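Here is a quick sketch of both measures in Python, using hypothetical paired forecasts and observations rather than the ForecastWatch archive:

```python
# "Within three degrees" and "missed forecast" (off by ten degrees or more) rates.

def temperature_accuracy(forecast, actual):
    n = len(actual)
    errors = [abs(f - a) for f, a in zip(forecast, actual)]
    within_three = sum(e <= 3 for e in errors) / n
    missed = sum(e >= 10 for e in errors) / n
    return within_three, missed

# Hypothetical one-day-out high temperature forecasts versus observed highs.
print(temperature_accuracy([78, 81, 90, 70], [80, 80, 79, 71]))  # (0.75, 0.25)
```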

The chart below shows the "within three degrees" and "missed forecasts" for both high and low temperature forecasts from all providers for one to nine days out in 2005. Not all of the providers tracked provide forecasts out nine days (and some offer even more). Each bar at one day out represents about one and a half million forecasts. Each bar at nine days out represents about 800,000 forecasts.

At one day out, for the entire country, high temperature forecasts are within three degrees of the observed afternoon high about 68% of the time. High temperature forecasts are blown one day out about 3% of the time. Many of these blown forecasts one day out are because of climate extremes that the models don't handle well, or timing errors with cold or warm fronts.

You might notice that the low temperature accuracy is lower than the high temperature accuracy. There are a couple of reasons for this. One, forecasts are taken at 6 pm, and the high is usually around 3-6 pm, whereas the low is around 3-6 am the next morning. Most forecasters, when they forecast a low temperature, forecast the overnight low. For "tomorrow's" (one-day-out) forecast, the high will occur around 24 hours from the forecast, and the low 12 hours after that. That 12-hour difference is important one day out, but becomes less important the farther out the forecast is. This is apparent in the graph. At nine days out, the difference between high and low temperature accuracy is only 1.5%, whereas at one day out it's 7%.

Notice also that the "within three degrees" accuracy seems to taper off, and if you draw an imaginary line, it looks as though, if you continued the accuracy out to 10, 11, 15, etc. days, it would converge on an accuracy of around 30% or 35%. You might need to click on the graph to view the larger version to notice this. This is significant because the average accuracy of a climate forecast is about 33%. A climate forecast takes the normal, average high and low for the day and makes that your forecast. So at nine days out, forecasters still show some skill. They are better than just using the normal temperature for the day. But not by much.

permalink

Sunday, September 17, 2006


The Wall Street Journal Online Article about ForecastAdvisor

On Thursday, the Wall Street Journal Online published a column by Carl Bialik, The Numbers Guy, called "Grading Weather Forecasts". It was about what I do here at ForecastAdvisor. Thank you everyone for all the positive comments and suggestions that I have received from people who have read the article and have tried ForecastAdvisor.

In the article, Dr. Bruce Rose, Vice President and Principal Scientist for weather systems at The Weather Channel, stated that July and August are the easiest months to forecast for temperature, with February the toughest. He's right, but I wanted to expand on that comment.

The graph below shows high and low temperature error by month for all forecasters ForecastWatch tracks. The error measurement used is what's called "RMS error", or "root-mean-squared error." This error measurement takes the error value (forecasted temperature minus actual temperature) and squares it. This makes all error values positive, and also penalizes forecasts that are way off much more than forecasts that are close. A forecast that is 10 degrees off is given four times the error of one that is 5 degrees off, rather than just two times, as it would be if the error were not squared. All the squared errors are then averaged and the square root is taken, so that the unit of error is still degrees.
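Here is a small sketch of that calculation (with made-up numbers, not the actual dataset):

```python
# Root-mean-squared error: square each error, average the squares, then take
# the square root so the result is back in degrees.
from math import sqrt

def rmse(predicted, actual):
    squared_errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return sqrt(sum(squared_errors) / len(squared_errors))

# A 10-degree miss contributes 100 to the squared-error sum, four times the 25
# contributed by a 5-degree miss, so large misses dominate the result.
print(round(rmse([80, 75], [70, 70]), 2))  # 7.91, versus an average absolute error of 7.5
```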

In the graph below, each month's data point is the aggregation of about 600,000 forecasts one- to five-days out from all the providers. I think it is a fairly representative sample. Note the dips and peaks in the error graph. The error lines peak in the winter, and bottom out in the summer. The graph's y-axis starts at 3 degrees error to emphasize the difference, but even so, a winter temperature forecast has about 75% more error on average than a summer temperature forecast.

Overall monthly temperature forecast root-mean-squared error

Just like Dr. Rose said, this past February was the worst month for error in 2006 so far, and the previous July had seen the least error before that. But why is it easier to forecast temperatures in the summer than in winter? For one, even in places like Key West, Florida, with some of the most unchanging weather in the continental U.S., there is more temperature variation in winter than in summer. The more temperatures fluctuate, the harder it is to predict.

Just for fun, I've added a linear trend line to the high and low temperature error graphs. If it's to be believed, the linear trend is down, which means forecasts are slowly getting better. This past winter, temperature forecasts did better overall than the winter of 2004-2005. It could also just be El Nino starting...

permalink

Saturday, August 26, 2006


Weather Forecast Accuracy Gets Boost with New Computer Model

The National Center for Atmospheric Research (or NCAR), sent out a press release announcing that the high-resolution Weather Research and Forecasting model (WRF), developed by a partnership of NCAR, the National Weather Service, and over 150 research institutions, has been adopted for day-to-day operational use by civilian and military weather forecasters.

According to the press release, tests over the last year at NOAA and AFWA have shown that the new model offers multiple benefits over its predecessor models. For example:

  • Errors in nighttime temperature and humidity across the eastern United States are cut by more than 50%.
  • The model depicts flight-level winds in the subtropics that are stronger and more realistic, thus leading to improved turbulence guidance for aircraft.
  • The model outperformed its predecessor in more than 70% of the situations studied by AFWA.
  • WRF incorporates data from satellites, radars, and a wide range of other tools with greater ease than earlier models.

It will be very interesting to see how use of the new model trickles down into the public forecasts that ForecastWatch tracks. We'll be certainly keeping an eye on the trends and will let you know about any we see.

If you are interested in learning more about the new model, you can visit the WRF website here.

The WRF model is the replacement for the widely used MM5 model, which can run on anything from a Linux desktop to a supercomputer. The model is primarily written in Fortran, and comprises about 360,000 lines of code. You can run the model yourself by getting the source code here. It features a module-based approach, which will allow researchers to plug in their own specific models (say for hail formation, etc.) and physics schemes/solvers.

It is certainly exciting times in weather forecasting!

permalink

Monday, July 17, 2006


See Previous Forecasts

We recently introduced a new feature to ForecastAdvisor. If you click on any forecast, it will bring up a set of previous forecasts for that day. For example, go to a forecast page, say Fort Collins, Colorado (which was recently Number One on Money Magazine's 2006 Best Places To Live). The day I'm writing this, you see this forecast:

Fort Collins, Colorado Forecast Created On July 17, 2006

The current forecast for "today" is for a chance of rain showers with a low of 59° and high of 88° Fahrenheit. If you click on that forecast, the current forecast will dim and you will be shown the current and past weather forecasts for "today". As I write this, it looks like:

Fort Collins, Colorado Forecasts For July 17, 2006 Created Today and Previous Days

The forecast on the left of this weather forecast trend page is the current forecast for today. Today's forecast for today, you might say. It matches the forecast on the 5-day forecast page. The following forecasts are also forecasts for today. These forecasts for today were made on previous days.

For example, look at the forecast from two days ago. It is the third forecast (the forecast created today is first, the forecast made one day ago is second, so the forecast made two days ago is third). On that day (July 15, 2006) the forecast for today was for Mostly Sunny skies, and the high temperature forecast was 95°; now it's 88°. That's good. It looks like the high today is going to be 89° in Fort Collins, and there have been showers in the area today.

So why was this feature added? Well, for one, curiosity. Being a weather geek I knew that weather forecasts changed frequently, but I didn't know by how much. I also didn't know if knowing how stable or unstable a forecast is would help someone understand how much the forecast should be trusted.

It's quite interesting to see how a forecast changes over time, and I do believe you can learn from it. At any rate, it gives a serious weather person more information than is presently available to help them understand the weather.

Are there any numbers to back up this feature? I took 2005 forecast data from Accuweather, The Weather Channel, Intellicast, CustomWeather, and the National Weather Service, a total of almost 1.2 million forecasts, and ran some numbers. What I was looking for is the average accuracy of the one-day-out forecast relative to how much the forecast changed. In the Fort Collins example above, the forecast changed from 89° to 88° from the current forecast to the one-day-out forecast, or 1 degree. What I did then is take all one-day-out forecasts that were one degree different and averaged how accurate they were. I did the same for zero degree different, two degree different, and so on.
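Here is a rough sketch of that calculation in Python. The record layout (earlier forecast, one-day-out forecast, actual high) is a simplification of my own, not the actual analysis code:

```python
# Group one-day-out high temperature forecasts by how much they differ from an
# earlier forecast for the same day, then average the error within each group.
from collections import defaultdict

def error_by_forecast_change(records):
    """records: iterable of (earlier_forecast, one_day_out_forecast, actual) tuples."""
    buckets = defaultdict(list)
    for earlier, one_day_out, actual in records:
        change = abs(one_day_out - earlier)
        buckets[change].append(abs(one_day_out - actual))
    return {change: sum(errors) / len(errors)
            for change, errors in sorted(buckets.items())}

# Hypothetical records; the real analysis used almost 1.2 million forecasts.
sample = [(95, 88, 89), (90, 89, 89), (85, 86, 88), (70, 78, 84)]
print(error_by_forecast_change(sample))  # {1: 1.0, 7: 1.0, 8: 6.0}
```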

The chart looks like this (click here for a larger version):

Weather Forecast High Temperature Change Analysis Graph

What this graph shows is that the smaller the difference in high temperature forecasts between the one-day-out forecast and any other day-out forecast (two-day-out, three-day-out, etc.), the smaller the overall forecast error. This means that a forecast that changes a lot is likely to be more incorrect than a forecast that is stable. The dashed line is the average high temperature forecast error for all one-day-out forecasts. A forecast that changes two degrees or less between any previous forecast and the one-day-out forecast has an average error below the overall average, and the larger the difference in forecasts, the greater the average error.

There is a lot of further analysis required before any definitive conclusions can be reached, but it's promising. And it certainly is more reason why we added the ability to view previous forecasts. I hope that you find the ability to view previous forecasts as useful and enlightening as I have.

Please use the comments link below to let us know your thoughts!

permalink

Sunday, June 11, 2006


New, Cleaner Design and More

If you are reading this, you've noticed that we have unveiled our new design. This new design is cleaner and will allow us to add additional features easily. I think you will find that the forecast is easier to understand at a glance.

Thanks to Ben Hunt for the design and graphics. He did an awesome job!

If you have any comments about the design, or anything else, don't hesitate to contact us!

permalink

Friday, June 2, 2006


Precipitation Accuracy

Precipitation forecasting, and calculating precipitation forecast accuracy, is a bit different than temperature forecasting and calculating temperature forecast accuracy.

With temperature forecasts, you are dealing in numbers. If a forecast says it is going to be 80 degrees, and it is 75, the error can be easily calculated. It was off by +5: it was 5 degrees too warm.

Precipitation is different. A forecaster might predict a "slight chance of rain", or "snow likely". If it rains on a day a forecaster predicted a "slight chance of rain", was the forecast right? What if it doesn't rain? What if it only rained 1/100th of an inch? Or half an inch? It is not nearly as clear how one should grade accuracy in that case.

The simplest thing to do is to call a forecast a "rain event" if the forecast mentions rain at all. There is some merit to this. When people hear "rain" they often think it is going to rain, even if it was preceded by "30 percent chance of". This basic accuracy statistic is useful, and it is used for the basic accuracy measurements on ForecastAdvisor. But it is basic, and we calculate more advanced statistics in ForecastWatch.

One problem is that it doesn't rain or snow on about 7 out of every 10 days. So if you always forecast no precipitation, you will already be right about 70% of the time. We might then want to look only at a forecaster's ability to predict precipitation, and ignore his or her ability to predict non-precipitation, since non-precipitation is the norm. Consumers of weather forecasts might prefer this measure as well. They don't particularly care that it isn't going to rain, but they do want to know when it will rain. One common measure of accuracy for forecasting an event (like rain or snow) is the critical success index, or the "threat score". It ignores correct predictions of non-events: it is the number of correctly forecast events divided by the number of cases in which the event was either forecast or actually occurred.

Another interesting statistic for precipitation forecasts is bias. Forecast temperature bias is how much higher or lower forecasts are than what actually occurred. For example, a temperature bias of 1 degree means that forecasts are, on average, one degree higher than what actually occurred. For precipitation, bias is the ratio of predicted events to actual events. If it actually rained 30% of the time but rain was forecast 31% of the time, the bias would be 1.03: rain was predicted to occur 3% more often than it actually occurred.
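As an illustration, here is a minimal Python sketch of both statistics, computed from paired booleans (hypothetical data, not ForecastWatch's code):

```python
# Critical success index (threat score) and precipitation bias from paired
# booleans: did the forecast call for precipitation, and did it occur?

def precip_scores(forecast_precip, observed_precip):
    hits = sum(f and o for f, o in zip(forecast_precip, observed_precip))
    false_alarms = sum(f and not o for f, o in zip(forecast_precip, observed_precip))
    misses = sum(not f and o for f, o in zip(forecast_precip, observed_precip))
    # The threat score ignores correct "no precipitation" forecasts entirely.
    csi = hits / (hits + false_alarms + misses)
    # Bias is the ratio of forecast events to observed events.
    bias = (hits + false_alarms) / (hits + misses)
    return csi, bias

forecasts    = [True, True, False, True, False, False]
observations = [True, False, False, True, True, False]
print(precip_scores(forecasts, observations))  # (0.5, 1.0)
```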

You'd think a bias of 1 (no bias) would be preferable. But that isn't always the case. Some forecasters believe that consumers would prefer a rain forecast that fails over a non-rain forecast that fails. If you can't predict with 100% accuracy, then predicting rain more often in the cases where you aren't sure is more valuable to consumers than only predicting rain when you are sure. That reasoning makes some sense: we would rather be pleasantly surprised by a sunny day we thought would be rainy, than be caught unprepared in a rain shower when sun was forecast.

However, some might say that the economic cost of over-forecasting precipitation is higher than that of under-forecasting. A humorous article by Rich Adams, editor of the Cheboygan Daily Tribune, sums it up nicely:

Granted, the Weather Service can be off sometimes. Rude often pointed out that the National Weather Service on a Tuesday predicted heavy rain for a summer weekend, and by the time the weekend arrived there was nothing but sunshine and warm weather. He attributed the earlier prediction to downstate tourists canceling their weekend plans based on a prognostication five days out that turned out to be wrong. "What would you tell them?" I asked. "To look out the doggone window before they call for rain," Rude said wryly. "Be serious," I said. "OK, I would tell them to put a positive spin on things instead of a negative outlook," Rude said. "Sure, the weather report might call for a 30 percent chance of rain. That wee amount might prompt some tourist to cancel their hotel reservations while they still can. But if the National Weather Service said there was a 70 percent chance of sunshine and warm breezes, there wouldn't be any cancellations." He had a point.

The value of a precipitation forecast, or the cost of an incorrect one, might be different depending on if you are the tourist or the shop owner. Is a negative spin (or bias) better than a positive one? It depends.

permalink

Saturday, March 18, 2006


Is there an NWS web issue?

I love working with meteorologists. They perform such an important role in society. But their jobs are often underappreciated because they have to play so many roles...they are part teacher, part scientist, part artist. I enjoy providing tools that help them, and enjoy their criticism that helps me learn and grow and make the tools that ForecastWatch provides even better.

"Sandy in Arizona" left a comment on this blog. Thank you Sandy...it was the first comment (woohoo!!) :-). I hope you don't mind me talking about it here. If you would like to discuss further, please don't hesitate to contact me at any of the contact points mentioned on the websites.

Sandy made the comment:

I checked Tempe AZ...the NWS digital forecasts ranked number one while the NWS forecast ranked last...there would appear to be something wrong with your methodology. There almost always is with these types of sites.

Instead of asking us why the NWS digital forecast might be ranked differently than the NWS web forecast, Sandy proclaims that there "would appear to be something wrong with [our] methodology." I would certainly love to have a discussion about the methodology and uncover whether there indeed is a problem. If there is, I would like to fix it. But with such a closed-minded statement as "There almost always is with these types of sites", I don't think there is much opportunity. The comment reeks of prejudice...ForecastWatch is just like all the other sites (BTW, what other sites? Can you send me some links please?), so why bother.

So let's look at Tempe, Arizona and see what the problem is [HINT: It's NOT a problem with the methodology; rather, the NWS has a problem I think they need to fix].

The NWS forecasts that are graded for accuracy on ForecastAdvisor are the public forecasts available on weather.gov. The NWS Digital forecasts come from the SOAP interface to the NDFD. Forecasts on weather.gov are queried by zipcode, NDFD by latitude/longitude. ForecastWatch collects forecasts for all AWOS/ASOS observation sites, and maps the nearest/enclosing zipcode to each observation site.

For Tempe, Arizona, the forecast and observation site is actually Phoenix, Arizona. The AWOS/ASOS observing station is KPHX and the mapped zipcode is 85065. Zipcode 85065 encloses the AWOS/ASOS observing station, so for all intents and purposes they are equivalent as far as querying forecasts go.

So let's go to weather.gov and enter a couple of zipcodes. First, let's enter the zipcode of ForecastWatch's office, 43040. It returns a forecast for Marysville, Ohio. Just as expected. How about another random zipcode...68106 for Omaha, Nebraska. Again, perfect, the NWS forecast that shows up is for Omaha, Nebraska. No surprises so far.

Now let's enter the zipcode 85065 on weather.gov. Hmmm...that's odd, it returns a forecast for Quartzsite, Arizona. That's 128 miles from Phoenix. Let's try a nearby zipcode, say Scottsdale, Arizona (85260). Hmmm...Quartzsite again. Quartzsite sure does have a lot of zipcodes. Now let's look at querying from the region page (the Phoenix office website). When you query 85065, you get "not found" (though I can assure you the zipcode does exist). When you query 85260 you get Scottsdale, as expected.

What's happening then is that ForecastWatch is querying the NWS weather.gov site just like a user would, and when it queries zipcode 85065 it gets the forecast for Quartzsite, Arizona, 128 miles away. No wonder the NWS forecast shows as being so bad. It is! Shouldn't querying weather.gov for zipcode 85065 return the forecast for Phoenix, Arizona, not Quartzsite? It appears to me that there is a zipcode mapping problem. Most zipcodes appear to work just fine, but some, like 85065, do not. Can someone from the NWS comment? Thanks!

If "Sandy from Arizona" would have enquired if there was a problem with the methodology, instead of assuming it, unknown problems can be uncovered that may point in unexpected directions, lead to quality improvements, and discover things previously unknown.

I'll comment on Sandy's other comment ("same milk in the best looking bottle") in a future blog post.

permalink

Saturday, March 4, 2006


2005 Weather As Seen Through The Internet

Since March of last year, the amount of time it takes the ForecastWatch system to get and process weather forecasts has been recorded. I was doing some troubleshooting on the web weather forecast retrieval system and as part of that troubleshooting I looked at previous retrieval times.

When I created a graph of the retrieval times, there were a few very large, very curious spikes. Since I was looking at the retrieval and processing times for the public, web forecast component, there could be a number of explanations for the spikes.

They could be a result of problems with the network between ForecastWatch.com's computers and the websites of Accuweather, The Weather Channel, MyForecast, the National Weather Service and the like. If one of the websites was undergoing maintenance, or was having problems, that might also account for the up-tick.

The spikes, where it took longer to retrieve the weather forecasts, could also be because the weather websites were busy. If a lot of people were trying to access, say, Accuweather.com or Weather.com at once, the servers could become overloaded and slow things down for everyone else. If that were the case, the sites most likely became busy because people were interested in some big weather event affecting, or about to affect, the United States. On a "normal" weather day, you'd expect a "normal" number of website visitors. But if something big were happening or expected to happen (a major snowfall or hurricane, for example), people who otherwise wouldn't be visiting, or would only visit once, would visit many times. Traffic would go up, and response times would get worse.

Here is the graph showing the time it took to retrieve all web-based weather forecasts. These are weather forecasts offered to the public by the weather forecasting companies. ForecastWatch.com also receives non-public forecasts by various means, but their retrieval times aren't included here, since they are not on public web sites.

Click here for a larger version of this graph.

Immediately, you notice four huge spikes: on 3/11, 9/19-9/22, 10/20, and 12/5-12/7. All four can be linked to major weather events affecting a large number of people in the United States.

On 3/11/2005, an Alberta clipper dumped significant snow on New England.

From 9/19 through 9/22, Hurricane Rita was menacing the southern United States, first threatening Florida and then the Gulf Coast before making landfall early on the 24th near the Texas/Louisiana border.

On 10/20, Hurricane Wilma had just become the strongest Atlantic hurricane ever recorded and was heading for landfall near Cancun.

Finally, from 12/5 through 12/7, there was a major snowstorm stretching from Washington, DC to Boston.

But a couple of major weather events weren't among the top four.

Notably, Hurricane Katrina didn't have the same web-server impact as some of the other major hurricanes. It could be because of when ForecastWatch pulls web forecasts (in the evening). Or maybe after Katrina, people became more interested in future hurricanes because they were unfortunately reminded of their deadly power.
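For readers who want to flag spikes like these in their own retrieval logs, here is a minimal sketch. The data layout (a dict of date to total retrieval seconds) and the three-times-median threshold are illustrative assumptions, not the rule ForecastWatch uses.

    # Sketch: flag days whose total forecast-retrieval time is far above typical.
    from statistics import median

    def find_spike_days(retrieval_seconds_by_day, factor=3.0):
        """retrieval_seconds_by_day: dict mapping date -> total seconds spent
        retrieving all web forecasts that day. Returns the dates whose totals
        exceed `factor` times the median day."""
        typical = median(retrieval_seconds_by_day.values())
        return sorted(day for day, seconds in retrieval_seconds_by_day.items()
                      if seconds > factor * typical)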

permalink

Tuesday, January 24, 2006


Weather Forecasting Extreme

Everybody can be a weather forecaster. In fact, my two daughters, ages seven and nine, can predict the weather a year from now.

I ask them, "What is the temperature going to be like next winter?". And they answer "Cold."

They are right. So I ask them, "What about next summer?". And they correctly answer, "Warm."

They are right because temperature tends to follow averages. My children reason that because last winter was cold, it will be cold this winter too. Because temperatures tend toward an average, people and businesses can use that information to plan and to make better decisions. Municipalities in the northeast don't order road salt for July. And retailers in the upper plains don't display swimsuits in November. You don't need a meteorologist on staff to make those types of decisions.

If only it were that easy. As we all know, while temperatures tend to follow averages, rarely does temperature stay average. Temperatures swing wildly from above average to below average and back again, and in sometimes unpredictable ways. It's these extremes and these changes, where temperature isn't normal, that make weather interesting and keep meteorologists employed.

An electric utility makes long-term decisions about how much electricity to produce based on past averages. They know that, for example, August electricity demand is higher than May electricity demand because there are more air conditioners running in August than in May. This is generally true because August is warmer than May (at least where I live). But what if a meteorologist told the electric utility that next week would be much hotter than average?

They would want to know, because they would want to make sure they generated enough electricity to power all those air conditioners working overtime. If they didn't, the alternative would be brownouts: not enough electricity to go around.

So it is often of greater value when a forecaster can predict weather extremes. This goes not only for temperature, but also for other extremes: tornadoes, high winds, flooding rain. Is there a way we can look at how well forecasters predict temperature extremes?

Normally, when a weather forecast's accuracy is calculated, you take the forecast and look forward to the actual. You see how well the forecast predicted the actual temperature. If you want to look at extreme temperatures, though, you want to take the actual temperatures and look back toward the forecasts. You want to figure out how accurate forecasts are when the actual is some amount above or below the normal average expected temperature.
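Concretely, that look-back grouping might be sketched like this. The record layout (forecast high, actual high, climate-normal high for each station-day) is an assumption for illustration; the example in the next paragraph ("ten degrees below normal") corresponds to the departure = -10 bucket, and the bias and absolute error it reports are the two metrics discussed below the graphs.

    # Sketch of grouping forecast errors by how far the actual high was from
    # the climate normal. The record layout is an illustrative assumption.
    from collections import defaultdict

    def error_by_departure_from_normal(records):
        """records: iterable of (forecast_high, actual_high, normal_high).
        Returns {departure_from_normal: (bias, mean_absolute_error)}."""
        groups = defaultdict(list)
        for forecast_high, actual_high, normal_high in records:
            departure = round(actual_high - normal_high)  # e.g. -10 = ten below normal
            groups[departure].append(forecast_high - actual_high)

        stats = {}
        for departure, errors in groups.items():
            bias = sum(errors) / len(errors)                 # signed average error
            mae = sum(abs(e) for e in errors) / len(errors)  # average absolute error
            stats[departure] = (bias, mae)
        return stats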

For example, you can look at all forecasts made for a date where the temperature was ten degrees below normal, and see how well they predicted that ten degree below normal temperature. In fact, you can do that for all days, grouped by how different the actual temperature was from the average expected climate normal. If you graph forecast temperature error grouped by that difference, you get the chart below.

Click here for a larger version of this graph.

This graph shows average error, or bias, for high temperatures: the tendency of a forecast to be either too high or too low. If the bias were zero, forecasts would, on average, be equally likely to be too high or too low (or all right on). If the bias were negative, forecasts on average under-predicted temperature, predicting a lower temperature than what actually occurred. Conversely, if the bias were positive, forecasts on average over-predicted temperature, predicting a higher temperature than what actually occurred.

The first thing that you notice about the graph is that bias is not the same for all actual temperature differences from normal. When temperatures are normal, or near normal, bias is nearly zero. There is an equal chance that a temperature forecast will be either too high or too low. But for the extremes, bias tells a different story. When the actual temperature is well below normal, forecast bias tends to be positive. Forecasts tend to be too warm when the actual temperature is colder than normal. And on the other side of the graph, bias is negative. Forecasts tend to be too cold when the actual temperature is warmer than normal.

The further out the forecast, the steeper the slope of the bias. That means forecasts tend to be more conservative than actual temperatures, and that conservatism grows the further into the future the forecast is for. That part is expected: a nine-day-out forecast has to rely more on climate normals than a one-day-out forecast, because of our current inability to accurately model atmospheric instabilities that far ahead.

But bias doesn't tell us how well forecasts predict high temperature, only which way they lean. Two forecasts, one ten degrees too high and one ten degrees too low, have a bias of zero but are, on average, ten degrees wrong. That unsigned average is called absolute error. The graph below shows high temperature absolute error plotted against how far the actual high temperature was from the average climate normal.
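Before looking at that graph, here is the ten-degree example worked out in a few lines (a trivial sketch, nothing more):

    errors = [+10, -10]                              # one forecast ten too high, one ten too low
    bias = sum(errors) / len(errors)                 # 0.0  -- looks perfect
    mae = sum(abs(e) for e in errors) / len(errors)  # 10.0 -- ten degrees off, on average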

Click here for a larger version of this graph.

What this shows is that high temperature weather forecasts tend to be most accurate when the temperature is average, right near the normal climate. That makes sense, because if you always predicted the normal average temperature, which doesn't take any skill, you would have an error of zero for days when the temperature was exactly the climate average temperature.

But if you always predicted the climate normal, your error would always equal the difference between the actual and the climate normal. So on days when the temperature is six degrees below normal, your error would be six degrees. If you look at the error curves, you can see that forecasters do better than that. But error does significantly increase when the actual temperature is further from the climate normal.

What this ultimately means is that weather forecasters don't do an equally good job of forecasting for all temperatures. If a forecaster says they have an average absolute error of 3 degrees, you cannot assume that means the forecaster can predict temperature extremes that well. And sometimes, what you are most interested in are those extremes.

ForecastWatch helps businesses and individuals understand and place value on weather forecasts so that they can be more accurately used quantitatively in modeling and prediction. We'll talk more about these graphs and what they mean in a future post.

permalink

Monday, January 23, 2006


Latest Accuracy Data Now In!

The December and full-year 2005 accuracy data has been audited and loaded into ForecastAdvisor. Now, "last month" shows December weather forecast accuracy data, and "last year" represents full-year 2005 data, rather than 2004 statistics.

permalink

Thursday, December 22, 2005


Best Places To Live If...

In August of this year, we published a fun paper called "Best Places to Live or Work If You Need to Know What the Weather Will Be Like Tomorrow." It was a serious look at how weather forecast accuracy differs depending on where you live. It's still available (for free, I might add) right here.

The paper ranked nearly 700 U.S. cities on two criteria:

  1. How accurate temperature weather forecasts for the city are
  2. How much temperatures vary day-to-day
The thought was, if you needed to know what tomorrow's weather would bring, you'd want to live somewhere where tomorrow's temperatures are much like today's, and where the weather forecasts are the most accurate. Honolulu, Hawaii came in first, while Williston, North Dakota came in last.
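As a rough illustration of how two such criteria could be combined, here is a sketch that orders cities by the sum of their two ranks. The paper's exact weighting isn't reproduced here, so treat both the combination rule and the input layout as assumptions.

    # Sketch (not the paper's exact method): combine forecast accuracy and
    # day-to-day temperature variability into one "predictability" ordering.
    # Lower forecast error and lower day-to-day variation are both better.

    def rank_cities(cities):
        """cities: dict of city -> (avg_forecast_error, avg_day_to_day_change).
        Returns cities from most to least predictable, using rank-sum as an
        illustrative combination rule."""
        by_error = sorted(cities, key=lambda c: cities[c][0])
        by_variation = sorted(cities, key=lambda c: cities[c][1])
        error_rank = {c: i for i, c in enumerate(by_error, start=1)}
        variation_rank = {c: i for i, c in enumerate(by_variation, start=1)}
        return sorted(cities, key=lambda c: error_rank[c] + variation_rank[c])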

Not included in the original paper is the following map, which shows the results graphically. The more blue an area is, the less predictable weather it has. The more red, the more predictable. The dots are the cities in the report.

(Click here for a larger version)

The map graphically shows that the southeast and far west are the most predictable, mainly due to the moderating effect of large bodies of water nearby. The upper plains are the least predictable, as they are often the battleground between cold air from the north and warm air from the gulf, which makes wide temperature swings common and keeps weather forecasters up at night. Dr. John W. Enz, North Dakota's state climatologist, noted that temperature variation is perhaps the most important feature of North Dakota's climate.

There are a few oddities. For example, notice the dot of cyan in a sea of red in southern California near the Mexican border. That's Campo, California (overall rank 416). The city ranks 126th and 196th for high and low temperature persistence (in the top third), but nearly at the bottom for high and low temperature forecasting, at 682nd and 681st. It looks like both the National Weather Service and Accuweather are having a hard time forecasting for the city. However, the other forecasters, including The Weather Channel, Intellicast, and CustomWeather, seem to be forecasting it fine. Compare this to nearby San Diego, California, where Accuweather and the National Weather Service seem to be doing fine. San Diego is 13th and 4th for high and low temperature persistence, and 42nd and 40th for high and low temperature forecast accuracy.

permalink

Monday, December 12, 2005


Home Field Advantage

Do weather forecasters do better forecasting their "local" area than a distant city, all other things being equal? Does being "local" improve a weather forecast? These are questions I occasionally hear, and I've heard both sides. Proponents say that knowing the quirks of local weather helps a forecaster improve his or her forecast, and that the only way a forecaster can learn those quirks is by living them. Opponents say that computers do much of the work of forecasting, and that weather works on a large enough scale that the location of the actual forecaster doesn't much matter.

I thought I would look at the ForecastWatch database to see if it could help me answer the question. ForecastWatch collects data on over 800 U.S. and international cities for the major forecasters: Accuweather (located in State College, PA), Weather Channel (located in Atlanta, GA), CustomWeather (located in San Francisco, CA), Intellicast (located in Andover, MA), and the National Weather Service, with many centers around the U.S. Intellicast and The Weather Channel are owned by the same company.

I decided to look at one- to three-day-out high temperature forecast accuracy for all of this year so far. There were 800 U.S. cities where accuracy was calculated. For each city, the forecasters were ranked by their total high temperature forecast error. The National Weather Service came in first in 252 of the 800 cities; that is, the NWS had the lowest average absolute error in high temperature forecasting this year (2005, through October) in 252 cities. Next was The Weather Channel with 248, Intellicast with 125, Accuweather with 122, and finally CustomWeather with 53 cities.
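The per-city winner counts above, and the state-by-state tallies that follow, can be produced with something like this sketch. The input layout is an illustrative assumption, not the ForecastWatch schema, and ties are glossed over.

    # Sketch of counting which provider "wins" each city, nationally and by state.
    # Input layout is an assumption: {(city, state): {provider: total_abs_error}}.
    from collections import Counter, defaultdict

    def count_wins(errors_by_city):
        national = Counter()             # provider -> number of cities won nationwide
        by_state = defaultdict(Counter)  # state -> provider -> cities won in that state
        for (city, state), provider_errors in errors_by_city.items():
            # min() breaks ties arbitrarily; a real analysis needs an explicit tie rule.
            winner = min(provider_errors, key=provider_errors.get)
            national[winner] += 1
            by_state[state][winner] += 1
        return national, by_state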

But that's not really informative. What we really want to see is who won each forecaster's "home" state. Accuweather was founded in, and has deep roots in, State College, Pennsylvania. Many of its meteorologists came up through the renowned Penn State meteorology program. If there is a "local" advantage in meteorology, Accuweather should have it for Pennsylvania. And indeed that is the case. Of the 23 cities tracked in Pennsylvania, Accuweather came out on top, with the least error, in 15 of them. So while Accuweather came out on top in only 15% of the cities nationwide, it won 65% of the cities in Pennsylvania. Accuweather also came out on top in neighboring Maryland, and tied with The Weather Channel in New Jersey.

How about The Weather Channel? Headquartered in Atlanta, with most of their meteorological staff there, would they come out on top in Georgia? They do! While they get the blue ribbon in 31% of the cities nationwide, they come out on top in about half the cities in Georgia, leading all providers with 8 of the 17 cities tracked (the NWS is next with 5). So it looks like they have a home field advantage as well.

Intellicast should win Massachusetts then, right? Wrong. It does rather poorly there, and also in New Hampshire, which is close to their headquarters in Andover. They are owned by the same company as The Weather Channel. Perhaps there is little local meteorological support in Andover? Or maybe there really isn't a "local" weather advantage. Maybe the statistics are just coincidental.

Let's look at CustomWeather, then. CustomWeather is the smallest of the tracked forecasters, but is very competitive in a lot of statistics. Unfortunately, they led in less than 7% of the cities, and didn't win their home state of California. But wait...they dominate Alaska and Hawaii, states probably forgotten about by the east coast forecasters. CustomWeather won 75% of Alaska's 18 tracked cities, and 80% of Hawaii's five. In fact, 24 of the 53 cities CustomWeather came out on top in (almost half) are in Alaska, Hawaii, and California. Not home field advantage, but pretty close.


United States map showing where weather forecasters had the most sites ranked number one in high temperature accuracy for one to three days out

(Click here for a larger version)

I don't think this data proves anything, but it is interesting. Maybe there is something to the assertion that being local gives weather forecasters an edge.

permalink
 

 