Performative prediction is a framework for learning models that influence the data they intend to predict. We focus on finding classifiers that are performatively stable, i.e. optimal for the data distribution they induce. Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters. Under this assumption, the loss must be strongly convex and smooth in these parameters; otherwise, the method will diverge for some problems. In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems. As a result, we are able to significantly relax the assumptions on the loss function. In particular, we do not need to assume convexity with respect to the model's parameters. As an illustration, we introduce a resampling procedure that models realistic distribution shifts and show that it satisfies our assumptions. We support our theory by showing that one can learn performatively stable classifiers with neural networks making predictions about real data that shift according to our proposed procedure.
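For concreteness, here is a minimal sketch of the standard performative prediction formulation the abstract refers to; the notation is assumed here rather than taken from the abstract. Writing \mathcal{D}(\theta) for the data distribution induced by deploying a model with parameters \theta and \ell(z; \theta) for the loss on a sample z, a classifier \theta_{PS} is performatively stable if it is optimal on the distribution it induces,

\theta_{PS} \in \arg\min_{\theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})} \big[ \ell(z; \theta) \big],

and repeated risk minimization retrains on the distribution induced by the previous deployment,

\theta_{t+1} \in \arg\min_{\theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_t)} \big[ \ell(z; \theta) \big].

The standard convergence results mentioned above take \mathcal{D}(\cdot) to be \varepsilon-Lipschitz as a function of \theta and require \ell to be \gamma-strongly convex and \beta-smooth in \theta, with \varepsilon < \gamma/\beta; the assumption adopted in this work instead places the Lipschitz condition on \mathcal{D} as a function of the model's predictions f_\theta(x).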