2004 SBIR awarded to WeatherFlow: New Methods to Discriminate Forecast Skill in Mesoscale Weather Predictions and Characterization and Application of Model Error Statistics.
Accurate assessment of forecast skill and confidence is an important facet of numerical weather prediction. The difficulty is that forecast model performance and certainty are not absolute measures. As an example, the consensus is that high-resolution forecast models have improved skill compared to lower-resolution models, yet traditional skill scores do not generally support this consensus. One possible reason for this paradox is that skill metrics are not appropriate by themselves for validating high-resolution models, since they do not account for the time continuity of the solution and are sensitive to small displacements in time and/or space. This effort involves the development of new methods that assess skill through a system that fuses three components: statistics such as biases and root mean square errors, among others; the same statistics produced by a technique that transforms wind time series into frequency spectra; and a subjective forecaster analysis. The analysis includes a full meteorological description for each day, model analyses, and subjective model forecast performance. Although the third component, the subjective analysis, is not likely to be a tool for use in an operational scheme, the ability to correlate statistical successes and failures with actual meteorological events is critical to understanding and implementing an optimum method to incorporate the “best” wind behavior pattern.
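The first two components above can be sketched in a few lines. The code below is a minimal illustration, not WeatherFlow's actual implementation: it computes bias and RMSE on a wind time series directly, then computes the same statistics on the amplitude spectra of the demeaned series (via an FFT), which is one simple way to reduce sensitivity to small timing displacements. All function names and the sampling-interval parameter are hypothetical.

```python
import numpy as np

def bias(forecast, observed):
    """Mean error: positive means the model over-forecasts on average."""
    return np.mean(forecast - observed)

def rmse(forecast, observed):
    """Root mean square error between forecast and observed series."""
    return np.sqrt(np.mean((forecast - observed) ** 2))

def spectral_stats(forecast, observed):
    """Bias and RMSE computed on amplitude spectra rather than time series.

    Demeaning removes the constant offset; comparing amplitude spectra
    ignores phase, so a forecast that is correct but shifted slightly in
    time is penalized less than it would be point-by-point.
    """
    f_spec = np.abs(np.fft.rfft(forecast - np.mean(forecast)))
    o_spec = np.abs(np.fft.rfft(observed - np.mean(observed)))
    return bias(f_spec, o_spec), rmse(f_spec, o_spec)

# Example: a forecast with a constant +0.5 m/s wind-speed bias scores
# poorly point-by-point but matches the observations spectrally.
obs = np.sin(np.linspace(0.0, 10.0, 200))
fcst = obs + 0.5
time_bias, time_rmse = bias(fcst, obs), rmse(fcst, obs)
spec_bias, spec_rmse = spectral_stats(fcst, obs)
```

A phase-shifted but otherwise correct forecast behaves similarly: large time-domain RMSE, near-zero spectral error, which is exactly the kind of discrimination the paragraph above argues traditional skill scores lack.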