We demonstrate a user-focused verification approach for evaluating probability forecasts of binary outcomes (also known as probabilistic classifiers) that (i) is based on proper scoring rules, (ii) focuses on user decision thresholds, and (iii) provides actionable insights. We argue that the widespread use of categorical performance diagrams and the critical success index to evaluate probabilistic forecasts may produce misleading results, and we illustrate how Murphy diagrams better reveal performance across user decision thresholds. Proper scoring rules that account for the relative importance of different user decision thresholds are shown to affect overall performance scores, as well as the supporting measures of discrimination and calibration. These methods are demonstrated by evaluating several probabilistic thunderstorm forecast systems. Furthermore, we illustrate an approach that allows a fair comparison between continuous probabilistic forecasts and categorical outlooks using the FIxed Risk Multicategorical (FIRM) score, and we establish the relationship between the FIRM score and Murphy diagrams. The results highlight how the relative performance of thunderstorm forecasts produced for tropical Australian waters by operational meteorologists and an automated system depends on which decision thresholds a user acts on. A hindcast of a new automated system is shown to generally outperform both the meteorologists and the previous automated system across tropical Australian waters. Although the methods are illustrated using thunderstorm forecasts, they are applicable to evaluating probabilistic forecasts for any situation with binary outcomes.
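As a minimal sketch only, and not the implementation used in this study, the Python snippet below shows one standard way to compute the elementary scores that underlie a Murphy diagram for probability forecasts of binary outcomes; the function name, threshold grid, and synthetic forecast data are illustrative assumptions.

```python
import numpy as np

def elementary_scores(prob_forecasts, binary_obs, thresholds):
    """Mean elementary score at each decision threshold theta.

    For a probability forecast p of a binary outcome y, the elementary
    score at threshold theta penalises a miss (y = 1 but p <= theta) by
    1 - theta and a false alarm (y = 0 but p > theta) by theta. Plotting
    the mean elementary score against theta gives a Murphy diagram.
    """
    p = np.asarray(prob_forecasts, dtype=float)
    y = np.asarray(binary_obs, dtype=float)
    scores = []
    for theta in thresholds:
        miss = (1.0 - theta) * ((y == 1) & (p <= theta))
        false_alarm = theta * ((y == 0) & (p > theta))
        scores.append(np.mean(miss + false_alarm))
    return np.array(scores)

# Illustrative synthetic data: 'fcst_a' is loosely informed by the
# observations, while 'fcst_b' is pure noise.
rng = np.random.default_rng(0)
obs = rng.binomial(1, 0.3, size=1000)
fcst_a = np.clip(0.6 * obs + rng.uniform(0.0, 0.4, size=1000), 0.0, 1.0)
fcst_b = rng.uniform(0.0, 1.0, size=1000)

thetas = np.linspace(0.01, 0.99, 99)
murphy_a = elementary_scores(fcst_a, obs, thetas)
murphy_b = elementary_scores(fcst_b, obs, thetas)

# Averaging the elementary scores over an evenly spaced threshold grid
# (and doubling) approximately recovers the Brier score, which is how a
# Murphy diagram decomposes a proper score by user decision threshold.
print(2.0 * np.mean(murphy_a), np.mean((fcst_a - obs) ** 2))
print(2.0 * np.mean(murphy_b), np.mean((fcst_b - obs) ** 2))
```

Replacing the uniform average over thresholds with a weighted one corresponds to proper scoring rules that emphasise the decision thresholds most relevant to a particular user, which is the idea exploited in the user-focused evaluation described above.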