In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions. We initiate the study of stochastic optimization for performative prediction. What sets this setting apart from traditional stochastic optimization is the difference between merely updating model parameters and deploying the new model. The latter triggers a shift in the distribution that affects future data, while the former keeps the distribution as is. Assuming smoothness and strong convexity, we prove rates of convergence both for greedily deploying models after each stochastic update (greedy deploy) and for taking several updates before redeploying (lazy deploy). In both cases, our bounds smoothly recover the optimal $O(1/k)$ rate as the strength of performativity decreases. Furthermore, they illustrate how, depending on the strength of performative effects, there exists a regime in which either approach outperforms the other. We experimentally explore this trade-off on synthetic data and a strategic classification simulator.
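To make the distinction between the two deployment schedules concrete, here is a minimal sketch, not the paper's implementation: it assumes a hypothetical location-shift distribution map (`sample_z`), a squared loss as a stand-in for a smooth, strongly convex loss (`grad_loss`), and an illustrative performativity strength `EPS`.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 0.5  # hypothetical strength of performativity

def sample_z(theta_deployed):
    # Hypothetical location-shift distribution map: D(theta) = N(EPS * theta, I).
    # Data drawn here depends on the most recently *deployed* model.
    return EPS * theta_deployed + rng.standard_normal(theta_deployed.shape)

def grad_loss(theta, z):
    # Gradient of the squared loss, a stand-in smooth, strongly convex loss.
    return theta - z

def greedy_deploy(theta0, steps, lr=0.05):
    """Deploy the model after every stochastic gradient update."""
    theta = theta0.copy()
    for _ in range(steps):
        z = sample_z(theta)               # data already reflects the current model
        theta -= lr * grad_loss(theta, z)
        # Deployment is implicit: the next sample uses the updated theta.
    return theta

def lazy_deploy(theta0, deployments, inner_steps, lr=0.05):
    """Take several stochastic updates between consecutive deployments."""
    deployed = theta0.copy()
    theta = theta0.copy()
    for _ in range(deployments):
        for _ in range(inner_steps):
            z = sample_z(deployed)        # distribution frozen at the last deployment
            theta -= lr * grad_loss(theta, z)
        deployed = theta.copy()           # redeploy only now, shifting the distribution
    return deployed

theta0 = np.zeros(2)
print("greedy:", greedy_deploy(theta0, steps=500))
print("lazy:  ", lazy_deploy(theta0, deployments=50, inner_steps=10))
```

Under this toy map both schedules converge toward the performatively stable point (the origin here, since it solves $\theta = \epsilon\theta$ for $\epsilon < 1$); which converges faster in general depends on the strength of the performative effects, as the abstract notes.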