In the face of uncertainty, the need for probabilistic assessments has long been recognized in the literature on forecasting. In classification, however, comparative evaluation of classifiers often focuses on predictions specifying a single class through the use of simple accuracy measures, which disregard any probabilistic uncertainty quantification. I propose probabilistic top lists as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicitable through the use of strictly consistent evaluation metrics. The proposed evaluation metrics are based on symmetric proper scoring rules and admit comparison of various types of predictions ranging from single-class point predictions to fully specified predictive distributions. The Brier score yields a metric that is particularly well suited for this kind of comparison.
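To illustrate how a Brier-score-based metric can place predictions of differing specificity on a common scale, the following Python sketch scores a single-class point prediction (as a degenerate distribution), a probabilistic top list, and a fully specified predictive distribution against the same observed class. The class labels, the uniform padding of unlisted classes, and all function names are illustrative assumptions for this sketch, not the construction proposed in the paper.

```python
def brier_score(probs, outcome):
    """Multiclass Brier score: sum over classes k of (p_k - 1{y = k})^2."""
    return sum((p - (1.0 if k == outcome else 0.0)) ** 2
               for k, p in probs.items())

CLASSES = ["a", "b", "c", "d"]  # hypothetical label set

def pad_top_list(top_list, classes=CLASSES):
    """Extend a probabilistic top list to a full distribution by spreading
    the leftover mass uniformly over unlisted classes. This padding is an
    illustrative choice for the sketch, not necessarily the paper's."""
    rest = [c for c in classes if c not in top_list]
    leftover = 1.0 - sum(top_list.values())
    full = dict(top_list)
    full.update({c: leftover / len(rest) for c in rest})
    return full

# Single-class point prediction, viewed as a degenerate distribution on "a".
point = {c: (1.0 if c == "a" else 0.0) for c in CLASSES}
# Probabilistic top list covering the two most likely classes.
top2 = pad_top_list({"a": 0.6, "b": 0.3})
# Fully specified predictive distribution.
full = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}

for name, pred in [("point", point), ("top-2", top2), ("full", full)]:
    print(name, round(brier_score(pred, "a"), 4))
```

Because every prediction type is mapped to a distribution over the same label set, one strictly consistent metric suffices to rank them all, which is the comparison the abstract describes.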