When predictions support decisions they may influence the outcome they aim to predict. We call such predictions performative; the prediction influences the target. Performativity is a well-studied phenomenon in policy-making that has so far been neglected in supervised learning. When ignored, performativity surfaces as undesirable distribution shift, routinely addressed with retraining. We develop a risk minimization framework for performative prediction, bringing together concepts from statistics, game theory, and causality. A conceptual novelty is an equilibrium notion we call performative stability. Performative stability implies that the predictions are calibrated not against past outcomes, but against the future outcomes that manifest from acting on the prediction. Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss. In full generality, performative prediction strictly subsumes the setting known as strategic classification. We thus also give the first sufficient conditions for retraining to overcome strategic feedback effects.
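To make the terms above concrete, the following display is a minimal sketch in notation we introduce for illustration (the symbols $\mathcal{D}(\theta)$ for the distribution induced by deploying a model with parameters $\theta$, and $\ell(z;\theta)$ for the loss on an example $z$, are assumptions of this sketch rather than quotations from the abstract):

\[
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\,\ell(Z;\theta),
\qquad
\theta_{\mathrm{PS}} \;\in\; \arg\min_{\theta}\; \mathbb{E}_{Z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\,\ell(Z;\theta),
\qquad
\theta_{t+1} \;\in\; \arg\min_{\theta}\; \mathbb{E}_{Z \sim \mathcal{D}(\theta_t)}\,\ell(Z;\theta).
\]

Read this way, $\mathrm{PR}$ is the performative risk of deploying $\theta$ on the distribution it itself induces; a performatively stable point $\theta_{\mathrm{PS}}$ is a fixed point of retraining, optimal for the very distribution its deployment creates; and the last recursion is the retraining procedure whose convergence to such a stable point of nearly minimal loss the main results characterize.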