  1. Evaluation of “non-standard” variables. Barbara Brown (bgb@ucar.edu), National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA. Verification Methods Tutorial, Berlin, Germany.

  2. What makes variables “non-standard”?
     • Not focused on commonly measured weather variables (e.g., T, Td, wind speed, u, v, etc.)
     • ???
     • Perhaps…
       – Not observed well, or require special observations
       – Forecasts of things that are difficult to measure
       – Predictions directly serve specific users
         • Particular events are forecast for particular decision-making situations (e.g., C&V for determining if planes can land)
         • The stakes can be high! (i.e., the decisions can have major safety and/or economic impacts)

  3. Topics
     • Tropical cyclones
     • Wildfires and fire weather
     • Sea ice
     • Aviation
     • Resources

  4. TROPICAL CYCLONE FORECAST VERIFICATION (images: TC Inigo, 2003; TC Gillian, 2013)

  5. What makes TC forecast verification “special”?
     • High impact weather and high impact weather forecasts
       – TC weather impacts affect large populations and have major economic impacts
       – TC weather forecasts impact disaster management decisions
     • TC forecasts are given intense attention by the media and public – in the public “eye”, so to speak
     • Observations are generally inferred and limited
     What is not different?
     • Information is needed by managers, forecasters, model developers, and end users
     • Basic verification methods are applicable, in general (i.e., continuous, categorical, probabilistic)

  6. What attributes of TC forecasts? Deterministic
     • TC track
       – Overall error
       – Cross-track error
       – Along-track error
       – Landfall timing and location
     • Intensity
       – Maximum wind
       – Central pressure
     • Temporal consistency
       – Temporal trend (rapid intensification)
     • Wind field
     • Size / radii
     • Precipitation
     • Storm surge
     • Waves

  7. What attributes of TC forecasts? Ensemble
     • Track distribution
     • Strike probability
     • Intensity distribution
       – Mean / median
       – Spread
       – 90th percentile
     • Prob (wind > threshold)
     • Prob (precip > threshold)
     • Storm surge
     • Landfall timing

  8. What verification methods are appropriate? Since we are evaluating a variety of variables and attributes, a variety of methods are used:
     • Categorical: rapid intensification / weakening, ...
     • Continuous: intensity, track location, winds, size, precipitation, ...
     • Spatial: wind structure, precipitation, ...
     • Probabilistic / ensemble: track and intensity, location ellipses, exceedance probabilities, precipitation, winds, size, strike probability, ...

  9. What about observations? Many hurricane observations are inferred... As usual, there is no such thing as “truth” – but perhaps more so for tropical cyclones than for other phenomena.
     Track and intensity
     • Identified in the “Best track” – a subjective analysis
       – Track: latitude, longitude
       – Intensity: minimum sea level pressure, maximum 1-min surface wind speed
     • Best track is a post-analysis of all of the latest information about a storm
       – Uses satellite and reconnaissance information
       – Smoothed version of the track
       – Intensity often subjectively inferred from flight-level winds or satellite information (Dvorak technique)
     Precipitation and wind fields
     • Over oceans, limited to satellite-based information plus data from reconnaissance

  10. Forecast characteristics
     • Forecast types
       – Human-generated tracks and intensity
       – NWP models: cyclone tracks are analyzed from gridded model output using a “tracker” algorithm
       – Statistical models: especially useful for predicting intensity
     • Model interpolation
       – Needed to adjust “late” models with current track information
     • Reference forecasts
       – Statistical forecast or climate/persistence

  11. Quality of deterministic TC track forecasts. Example questions:
     • What are the track errors (along-track, cross-track)?
     • What are the intensity errors?
     • Are temporal intensity trends correctly predicted?
     • What is the error in timing of landfall?
     • What is the error in forecast maximum wind (rain)?
       – Multi-day total precipitation
     • Is the spatial distribution of wind (rain) correct?
     • Others?

  12. Total, along-, and cross-track errors
     • Cross-track error measures error in the direction of movement (e.g., forecast too far to the right of the actual track)
     • Along-track error measures error in the speed of movement (e.g., forecast too slow)
     [Schematic: forecast track vs. actual track, showing total track error, cross-track error, and along-track error. Courtesy J. Franklin]

  13. Track error summary: track error is typically summarized as the average error (always positive).

  14. Verification methods for deterministic TC forecasts
     • Example: along-track and cross-track errors
     • “Along-track” measures errors in “speed”
     • “Cross-track” measures errors in “direction”
     (Courtesy James Franklin, NHC)
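A minimal sketch of this decomposition, assuming a flat-earth (local tangent plane) approximation and hypothetical forecast and best-track positions; the function name track_error_components is illustrative only, not the operational NHC procedure (which uses great-circle geometry).

```python
import numpy as np

def track_error_components(fcst_lat, fcst_lon, obs_lat, obs_lon,
                           prev_obs_lat, prev_obs_lon):
    """Decompose a track error into along- and cross-track components.

    Flat-earth approximation in km; the observed direction of motion is
    taken from the previous best-track position to the current one.
    """
    km_per_deg = 111.0                      # rough degrees-to-km conversion
    coslat = np.cos(np.deg2rad(obs_lat))

    # Error vector: forecast position relative to observed position (km)
    err = np.array([(fcst_lon - obs_lon) * km_per_deg * coslat,
                    (fcst_lat - obs_lat) * km_per_deg])

    # Unit vectors along and across the observed direction of motion
    motion = np.array([(obs_lon - prev_obs_lon) * km_per_deg * coslat,
                       (obs_lat - prev_obs_lat) * km_per_deg])
    u_along = motion / np.linalg.norm(motion)
    u_cross = np.array([-u_along[1], u_along[0]])   # 90 degrees to the left

    along = float(err @ u_along)   # positive: forecast ahead of the storm (too fast)
    cross = float(err @ u_cross)   # positive: forecast to the left of the track
    total = float(np.linalg.norm(err))
    return total, along, cross

# Hypothetical 24-h forecast vs. best-track positions (degrees lat/lon)
print(track_error_components(25.8, -79.0, 25.5, -79.5, 24.8, -78.9))
```

The sign conventions in the comments are choices made for this sketch, not a standard.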

  15. Intensity error: intensity error is typically summarized as (1) mean error (bias) or (2) mean absolute error (always positive).
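A tiny sketch of these two summary measures, using hypothetical intensity (maximum wind) errors in knots:

```python
import numpy as np

# Hypothetical intensity errors: forecast max wind minus best-track max wind (kt)
errors = np.array([5.0, -10.0, 15.0, -5.0, 20.0, 0.0, -15.0])

bias = errors.mean()           # mean error: sign shows over- or under-forecasting
mae = np.abs(errors).mean()    # mean absolute error: always positive

print(f"bias = {bias:.1f} kt, MAE = {mae:.1f} kt")
```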

  16. Alternative: examine distributions of errors, and differences in errors.

  17. Paired comparisons: track and intensity (% improvement and p-value). Target forecasts significantly improve on the standard of comparison for intensity forecasts.

  18. Paired comparisons: track and intensity (% improvement and p-value). Target forecasts significantly reduce performance relative to the standard of comparison for track forecasts and some intensity forecasts.
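One simple way to obtain a percent improvement and a p-value for such a paired comparison is sketched below, using hypothetical matched absolute track errors and a paired t-test from SciPy. This is only a sketch: operational homogeneous comparisons typically also account for serial correlation between successive forecasts, which is ignored here.

```python
import numpy as np
from scipy import stats

# Hypothetical absolute track errors (n mi) for the same matched cases
target = np.array([45., 60., 52., 80., 38., 70., 55., 62.])    # target forecast
baseline = np.array([55., 72., 50., 95., 41., 88., 60., 75.])  # standard of comparison

# Percent improvement of the target over the standard of comparison
pct_improvement = 100.0 * (baseline.mean() - target.mean()) / baseline.mean()

# Paired t-test on the case-by-case differences
t_stat, p_value = stats.ttest_rel(target, baseline)

print(f"improvement = {pct_improvement:.1f}%, p = {p_value:.3f}")
```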

  19. Evaluating features: rapid intensification and weakening (RI/RW)
     Using a flexible definition of rapid intensification / rapid weakening events:
     • Standard definition (NHC): 30 m/s in 24 hours
     • Stricter definition: 30 m/s in 12 hours
     • “Fuzzy” definition: adjustable window to give credit even if there is a timing error
     Categorical statistics for RI/RW events can then be calculated: POD, FAR, CSI, etc.
     • Hit: events in the forecast and observed tracks both fall into an 18-hour window
     • Miss: events in the forecast and observed tracks do not occur at the same time
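A sketch of the fuzzy-matching idea: a forecast RI event counts as a hit if an observed RI event falls within an adjustable time window (18 hours on the slide). The helper ri_contingency and the event times are hypothetical, not the implementation behind the slide.

```python
def ri_contingency(fcst_times, obs_times, window_h=18.0):
    """Match forecast and observed rapid-intensification events within a time window.

    fcst_times, obs_times: RI onset times in hours since an arbitrary reference.
    Returns POD, FAR, and CSI from the resulting hits, misses, and false alarms.
    """
    obs_unmatched = list(obs_times)
    hits = false_alarms = 0

    for ft in fcst_times:
        # A forecast event is a hit if any unmatched observed event is close enough
        match = next((ot for ot in obs_unmatched if abs(ft - ot) <= window_h), None)
        if match is not None:
            hits += 1
            obs_unmatched.remove(match)
        else:
            false_alarms += 1

    misses = len(obs_unmatched)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi

# Hypothetical RI onset times (hours)
print(ri_contingency(fcst_times=[12, 48, 96], obs_times=[18, 90], window_h=18))
```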

  20. Evaluating features: TC precipitation evaluation, using storm-following masking with range rings
     • Forecast precipitation shifted to account for track error, with range rings around the best track
     • Accumulated storm precipitation distributions for model, satellite, and radar, by range ring
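A sketch of the range-ring idea: bin grid-point precipitation by distance from the storm center and summarize the distribution in each ring. The grid, grid spacing, and storm-center location below are hypothetical, and the track-error shift mentioned above is omitted.

```python
import numpy as np

# Hypothetical accumulated precipitation grid (mm) and distances from the storm
# center (km); in practice the center follows the best track at each valid time.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=5.0, size=(200, 200))
y, x = np.mgrid[0:200, 0:200]
dist_km = np.hypot(x - 100, y - 100) * 4.0   # assumed 4-km grid spacing

ring_edges = [0, 100, 200, 300, 400]         # range-ring radii (km)
for r_in, r_out in zip(ring_edges[:-1], ring_edges[1:]):
    mask = (dist_km >= r_in) & (dist_km < r_out)
    vals = precip[mask]
    print(f"{r_in:3d}-{r_out:3d} km: mean = {vals.mean():5.1f} mm, "
          f"90th pct = {np.percentile(vals, 90):5.1f} mm")
```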

  21. WILDFIRES AND FIRE WEATHER

  22. Fire weather verification
     • Wildfire conditions and associated weather can be predicted by humans, spread simulators, or coupled weather-fire models
     • Variables of interest:
       – Fire perimeter
       – Fire rate-of-spread
       – Underlying wind and other weather variables
       – Significant fire behavior (flame length, pyrocumulus, etc.)
     • There are many complications with the evaluation of these variables

  23. Meeting the users’ needs: Australia BOM process
     • Focus on a process to identify and document stakeholders’ goals
     • Different users have different needs
       – Management (Which model/simulator is best?)
       – Fire behavior analysts (How accurate are fire predictions?)
       – Simulator / model developers (quantify uncertainty in weather inputs to identify simulator improvements needed)

  24. Observation issues
     • Fire perimeter
       – Observed from the air? Satellite?
       – Obs are infrequent at best…
     • Only rare observations of significant phenomena (flame height, heat release, pyrocumulus, etc.)
     • Weather observations very limited…
       – Poor coverage in complex terrain

  25. Verification approaches
     • Spatial methods – MODE? CRA?
     • Contingency table statistics (TS, bias)
     • Area measures
     Issue: What about the impact of fire suppression efforts? (From Ebert presentation, Monday)
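As a sketch of the contingency-table measures mentioned above, threat score (TS, equivalently CSI) and frequency bias can be computed from gridded yes/no burned-area fields; the fields below are hypothetical.

```python
import numpy as np

# Hypothetical binary grids: 1 = inside fire perimeter, 0 = outside
fcst = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 1]])
obs  = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0]])

hits = np.sum((fcst == 1) & (obs == 1))
false_alarms = np.sum((fcst == 1) & (obs == 0))
misses = np.sum((fcst == 0) & (obs == 1))

ts = hits / (hits + misses + false_alarms)       # threat score (CSI)
bias = (hits + false_alarms) / (hits + misses)   # frequency bias

print(f"TS = {ts:.2f}, bias = {bias:.2f}")
```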

  26. SEA ICE

  27. The challenges
     • Arctic sea ice is changing dramatically and quickly
     • Climate, seasonal, and other models depend on good estimates of sea ice extent – and other characteristics
     • Many users are interested in impacts of changes in ice (shipping, mining, etc.)
     • Observations are limited…
       – Mainly satellite-based
       – Ice extent is best observed; other properties (thickness, concentration) are more limited

  28. Possible verification methods
     • Spatial
       – MODE
       – CRA
       – Image warping
     • Distance metrics
       – Baddeley, Hausdorff (see methods in R package)
       – See references by Gilleland and others at https://ral.ucar.edu/projects/icp/
     (From Arbetter 2012)
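As one concrete example of a distance metric, the Hausdorff distance between a forecast and an observed ice edge can be computed with SciPy's directed_hausdorff; the edge coordinates below are hypothetical grid-cell locations, and Baddeley's delta and related metrics are available in the R tools referenced above.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hypothetical (row, col) grid coordinates of forecast and observed ice-edge cells
fcst_edge = np.array([[10, 5], [11, 6], [12, 8], [13, 9]])
obs_edge  = np.array([[10, 4], [11, 5], [13, 7], [14, 9]])

# Symmetric Hausdorff distance = max of the two directed distances (grid units)
d_fo = directed_hausdorff(fcst_edge, obs_edge)[0]
d_of = directed_hausdorff(obs_edge, fcst_edge)[0]
hausdorff = max(d_fo, d_of)
print(f"Hausdorff distance = {hausdorff:.2f} grid cells")
```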

  29. AVIATION WEATHER

  30. Issues
     • Main issue: observations!
       – Limited in space and time
       – Biased in space and time, and by event (e.g., around airports, on flight routes; where weather is good!)

  31. Example: Icing PIREPs. Notable biases in location, time, and intensity; potentially systematic in areas near airports? (From Brown et al. 1997, Weather and Forecasting)

  32. EDR (turbulence) example: automated observations. Spatial biases and a highly skewed distribution.
     • Difficult to tune forecasts to predict “positive” events
     • Turbulence forecasts may not be representative of areas where planes don’t fly
     (From Sharman et al. 2014, J. Appl. Meteor. Climatol.)

  33. TAIWIN: Terminal Area Icing Weather Information for NextGen
     • Goal: improve NWP forecasts of precip type (especially freezing rain/drizzle) to predict super-cooled liquid
     • First step: identify appropriate observations
       – METARs
       – Radar / satellite
       – Crowd-sourced (mPING)
     [Figure legend: snow, rain, none, drizzle, freezing rain. Courtesy J. Wolf]
