Generative networks are often trained adversarially to minimize a statistical divergence between the reference distribution and the generative one. Some works instead trained generative networks to minimize Scoring Rules, functions that assess how well the generative distribution matches each training sample individually. We show how the Scoring Rule formulation easily extends to the so-called prequential (predictive-sequential) score, whose minimization enables probabilistic forecasting with generative networks. This objective leads to adversarial-free training, thereby avoiding the uncertainty underestimation caused by mode collapse, which is a common issue in the adversarial setting and undesirable for probabilistic forecasting. We provide consistency guarantees for the minimizer of the prequential score and use them to perform probabilistic forecasting for two chaotic dynamical models and a benchmark dataset of global weather observations. For the latter, we define scoring rules for spatial data by drawing on the relevant literature; with these, we obtain better uncertainty quantification than adversarial training, with little hyperparameter tuning.
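To make the scoring-rule training objective concrete, the following is a minimal sketch (not taken from the paper) of the standard unbiased sample-based estimator of the energy score, one of the most common strictly proper scoring rules for samples drawn from a generative model. The function name and the NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def energy_score(samples, obs, beta=1.0):
    """Unbiased sample estimate of the (negatively oriented) energy score.

    samples: (m, d) array of m draws from the generative model
    obs:     (d,) observed outcome
    Lower values indicate a better probabilistic forecast.
    """
    m = samples.shape[0]
    # Term 1: mean distance between model samples and the observation
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1) ** beta)
    # Term 2: mean pairwise distance among samples (i != j),
    # which rewards forecast spread and makes the score strictly proper
    diffs = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1) ** beta
    term2 = (diffs.sum()) / (m * (m - 1))
    return term1 - 0.5 * term2
```

Averaging such a score over training observations (or, for forecasting, over a prequential sequence of forecast-observation pairs) and minimizing it by gradient descent yields the adversarial-free training described above.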