Consider two or more forecasters, each making a sequence of predictions for different events over time. We ask a relatively basic question: how might we compare these forecasters, either online or post-hoc, while avoiding unverifiable assumptions on how the forecasts or outcomes were generated? This work presents a novel and rigorous answer to this question. We design a sequential inference procedure for estimating the time-varying difference in forecast quality, as measured by a relatively large class of proper scoring rules (bounded scores with a linear equivalent). The resulting confidence intervals are nonasymptotically valid and can be continuously monitored to yield statistically valid comparisons at arbitrary data-dependent stopping times ("anytime-valid"); this is enabled by adapting variance-adaptive supermartingales, confidence sequences, and e-processes to our setting. Motivated by Shafer and Vovk's game-theoretic probability, our coverage guarantees are also distribution-free, in the sense that they make no distributional assumptions on the forecasts or outcomes. In contrast to recent work by Henzi and Ziegel, our tools can sequentially test a weak null hypothesis about whether one forecaster outperforms another on average over time. We demonstrate their effectiveness by comparing forecasts on Major League Baseball (MLB) games and statistical postprocessing methods for ensemble weather forecasts.
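As a concrete illustration of the kind of procedure described above, the sketch below builds an anytime-valid confidence sequence for the average Brier-score differential between two forecasters. This is a minimal sketch, not the paper's exact construction: it uses an empirical-Bernstein-style confidence sequence in the spirit of Waudby-Smith and Ramdas for a constant average score differential, whereas the paper's intervals track a time-varying one. The predictable tuning of lambda, the priors on the running mean and variance estimates, and the simulated forecasters are all illustrative assumptions.

```python
import numpy as np

def brier(p, y):
    """Brier score for a binary outcome (negatively oriented: lower is better)."""
    return (p - y) ** 2

def eb_confidence_sequence(x, alpha=0.05):
    """Empirical-Bernstein-style confidence sequence for the mean of
    observations x_1, ..., x_T in [0, 1]. The returned (lower, upper)
    arrays hold simultaneously over all times t, so they can be monitored
    continuously and stopped at any data-dependent time."""
    T = len(x)
    lower, upper = np.empty(T), np.empty(T)
    log2a = np.log(2.0 / alpha)
    # Predictable running estimates of the mean and variance (with priors),
    # used to tune lambda_i before x_i is revealed.
    mu_hat, var_hat = 0.5, 0.25
    sum_x, sum_dev = 0.0, 0.0
    sum_lx, sum_l, sum_vpsi = 0.0, 0.0, 0.0
    for i in range(1, T + 1):
        xi = x[i - 1]
        lam = min(np.sqrt(2.0 * log2a / (var_hat * i * np.log(i + 1.0))), 0.5)
        v = 4.0 * (xi - mu_hat) ** 2
        psi = (-np.log1p(-lam) - lam) / 4.0  # psi_E(lambda)
        sum_lx += lam * xi
        sum_l += lam
        sum_vpsi += v * psi
        center = sum_lx / sum_l
        width = (log2a + sum_vpsi) / sum_l
        lower[i - 1] = max(center - width, 0.0)
        upper[i - 1] = min(center + width, 1.0)
        # Update the predictable estimates for round i + 1.
        sum_x += xi
        mu_hat = (0.5 + sum_x) / (i + 1)
        sum_dev += (xi - mu_hat) ** 2
        var_hat = (0.25 + sum_dev) / (i + 1)
    return lower, upper

# Simulated comparison: forecaster p is sharper than forecaster q.
rng = np.random.default_rng(0)
T = 5000
truth = rng.uniform(0.2, 0.8, T)                     # latent event probabilities
y = rng.binomial(1, truth)                           # binary outcomes
p = np.clip(truth + rng.normal(0.0, 0.05, T), 0, 1)  # sharper forecaster
q = np.clip(truth + rng.normal(0.0, 0.20, T), 0, 1)  # noisier forecaster

# Per-round score differentials d_t = S(p_t, y_t) - S(q_t, y_t) lie in [-1, 1];
# a negative average means p outperforms q. Rescale to [0, 1] for the CS,
# then map the bounds back to the differential scale.
d = brier(p, y) - brier(q, y)
lo, hi = eb_confidence_sequence((d + 1.0) / 2.0, alpha=0.05)
lo_d, hi_d = 2.0 * lo - 1.0, 2.0 * hi - 1.0

# Anytime-valid conclusion: the first time the interval excludes zero.
hits = np.flatnonzero(hi_d < 0.0)
print("p declared better at t =", int(hits[0]) + 1 if hits.size else None)
```

Because the bounds hold uniformly over time, declaring a winner at the first crossing (a data-dependent stopping time) keeps the error probability at most alpha; the e-process machinery in the paper extends this style of monitoring to the weak null with time-varying average score differentials.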