Predictive multiplicity occurs when classification models with nearly indistinguishable average performances assign conflicting predictions to individual samples. When used for decision-making in applications of consequence (e.g., lending, education, criminal justice), models developed without regard for predictive multiplicity may result in unjustified and arbitrary decisions for specific individuals. We introduce a new measure of predictive multiplicity in probabilistic classification called Rashomon Capacity. Prior metrics for predictive multiplicity focus on classifiers that output thresholded (i.e., 0-1) predicted classes. In contrast, Rashomon Capacity applies to probabilistic classifiers, capturing more nuanced score variations for individual samples. We provide a rigorous derivation for Rashomon Capacity, argue its intuitive appeal, and demonstrate how to estimate it in practice. We show that Rashomon Capacity yields principled strategies for disclosing conflicting models to stakeholders. Our numerical experiments illustrate how Rashomon Capacity captures predictive multiplicity in various datasets and learning models, including neural networks. The tools introduced in this paper can help data scientists measure, report, and ultimately resolve predictive multiplicity prior to model deployment.
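To make the abstract's claim that Rashomon Capacity "can be estimated in practice" concrete, below is a minimal sketch. It assumes the quantity for a single sample is the exponentiated capacity of the channel whose rows are the competing models' predicted class distributions, estimated with the Blahut-Arimoto algorithm; the function names and the choice of natural-log units are illustrative, not taken from the paper's code.

```python
import numpy as np

def _kl_rows(P, r):
    """Row-wise KL divergence D(P_i || r), with the 0*log(0) = 0 convention."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log(P / r), 0.0)
    return terms.sum(axis=1)

def rashomon_capacity(P, tol=1e-10, max_iter=10_000):
    """Sketch: estimate the Rashomon Capacity of one sample.

    P is an (m, c) array whose i-th row is model i's predicted class
    distribution for the sample. Returns exp(channel capacity), which
    lies in [1, c]: 1 when all models output identical scores, c under
    maximal disagreement.
    """
    P = np.asarray(P, dtype=float)
    q = np.full(P.shape[0], 1.0 / P.shape[0])  # prior over the m models
    for _ in range(max_iter):
        D = _kl_rows(P, q @ P)                 # Blahut-Arimoto update
        q_new = q * np.exp(D)
        q_new /= q_new.sum()
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    C = float(q @ _kl_rows(P, q @ P))          # capacity in nats
    return float(np.exp(C))
```

For example, two models that output the same score vector for a sample yield a value of 1 (no multiplicity), while two models that assign the sample to opposite classes with full confidence yield 2, the maximum for binary classification.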