Predictive multiplicity occurs when classification models with statistically indistinguishable performances assign conflicting predictions to individual samples. When used for decision-making in applications of consequence (e.g., lending, education, criminal justice), models developed without regard for predictive multiplicity may result in unjustified and arbitrary decisions for specific individuals. We introduce a new metric, called Rashomon Capacity, to measure predictive multiplicity in probabilistic classification. Prior metrics for predictive multiplicity focus on classifiers that output thresholded (i.e., 0-1) predicted classes. In contrast, Rashomon Capacity applies to probabilistic classifiers, capturing more nuanced score variations for individual samples. We provide a rigorous derivation for Rashomon Capacity, argue its intuitive appeal, and demonstrate how to estimate it in practice. We show that Rashomon Capacity yields principled strategies for disclosing conflicting models to stakeholders. Our numerical experiments illustrate how Rashomon Capacity captures predictive multiplicity in various datasets and learning models, including neural networks. The tools introduced in this paper can help data scientists measure and report predictive multiplicity prior to model deployment.
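The abstract states that Rashomon Capacity can be estimated in practice but does not spell out the procedure. The sketch below is a rough illustration only, not the authors' implementation: it assumes the Rashomon set is approximated by a finite collection of competing models, treats the metric for a single sample as the exponentiated channel capacity of the channel from models to predicted classes, and computes that capacity with the standard Blahut-Arimoto iteration. All names here (`rashomon_capacity`, `scores`) are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): estimate a capacity-style
# predictive-multiplicity score for one sample from the probability vectors
# produced by m near-optimal models over c classes.
import numpy as np

def rashomon_capacity(scores, n_iter=500, tol=1e-10):
    """scores: (m, c) array; row i holds model i's predicted class
    probabilities for the sample of interest."""
    W = np.asarray(scores, dtype=float)      # channel matrix: models -> classes
    m, c = W.shape
    p = np.full(m, 1.0 / m)                  # uniform prior over models
    for _ in range(n_iter):
        q = p @ W                            # output (class) marginal under p
        # per-model KL divergence D(W_i || q): the Blahut-Arimoto update term
        d = np.sum(W * np.log(np.clip(W, 1e-300, None) /
                              np.clip(q, 1e-300, None)), axis=1)
        p_new = p * np.exp(d)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # mutual information at the (approximately) optimal model distribution
    q = p @ W
    d = np.sum(W * np.log(np.clip(W, 1e-300, None) /
                          np.clip(q, 1e-300, None)), axis=1)
    capacity_bits = float(p @ d) / np.log(2)
    return 2.0 ** capacity_bits              # 1 = full agreement, up to c classes

# Example: three near-optimal binary classifiers that disagree on one sample.
print(rashomon_capacity([[0.9, 0.1], [0.1, 0.9], [0.55, 0.45]]))
```

Under these assumptions the returned value ranges from 1 (all models assign the sample identical scores) to the number of classes (models disagree maximally), which is what makes it usable as a per-sample multiplicity report before deployment.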