In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, called the performative risk ($\mathrm{PR}$), captures the expected loss incurred by a predictive model \emph{after} deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex $\mathrm{PR}$ minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that together imply convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the $\mathrm{PR}$ minimization problem to iterative optimization methods.
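For concreteness, the performative risk admits the standard formulation below; a minimal sketch, assuming the usual performative prediction setup in which $\ell(z;\theta)$ is the loss on a data point $z$ under model parameters $\theta$, and $\mathcal{D}(\theta)$ is the distribution map induced by deploying the model:
\begin{equation*}
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\bigl[\ell(z; \theta)\bigr].
\end{equation*}
The non-convexity discussed above stems from the composition of $\theta$ entering both the loss and the distribution $\mathcal{D}(\theta)$, even when $\ell(z;\cdot)$ is convex for every fixed $z$.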