The empirical risk minimization approach to data-driven decision making assumes that we can learn a decision rule from training data drawn under the same conditions as the ones we want to deploy it under. In a number of settings, however, we may be concerned that our training sample is biased, and that some groups (characterized by either observable or unobservable attributes) may be under- or over-represented relative to the general population; in this setting, empirical risk minimization over the training set may fail to yield rules that perform well at deployment. Building on concepts from distributionally robust optimization and sensitivity analysis, we propose a method for learning a decision rule that minimizes the worst-case risk incurred under a family of test distributions whose conditional distributions of the outcome $Y$ given covariates $X$ differ from the conditional training distribution by at most a constant factor, and whose covariate distributions are absolutely continuous with respect to the covariate distribution of the training data. We apply a result of Rockafellar and Uryasev to show that this problem is equivalent to an augmented convex risk minimization problem. We give statistical guarantees for learning a robust model using the method of sieves and propose a deep learning algorithm whose loss function captures our robustness target. We empirically validate our proposed method in simulations and in a case study with the MIMIC-III dataset.
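The Rockafellar–Uryasev equivalence invoked above can be illustrated numerically in a simplified special case. If the test distribution's likelihood ratio with respect to the training distribution is bounded by a constant $\Gamma$, the worst-case expected loss equals the conditional value-at-risk (CVaR) at level $\alpha = 1 - 1/\Gamma$, which the Rockafellar–Uryasev result expresses as a convex minimization over an auxiliary threshold $\eta$. The sketch below is our own illustration, not the paper's algorithm; the function names and the bound `gamma` are hypothetical.

```python
import numpy as np

def ru_risk(losses, gamma):
    """Worst-case risk via the Rockafellar-Uryasev formulation:
        sup_Q E_Q[loss] = min_eta { eta + gamma * E_P[(loss - eta)_+] },
    the CVaR of the losses at level alpha = 1 - 1/gamma.
    For an empirical distribution the objective is piecewise linear and
    convex in eta, so the minimum is attained at a sample point."""
    etas = np.sort(losses)
    vals = etas + gamma * np.maximum(losses[None, :] - etas[:, None], 0.0).mean(axis=1)
    return float(vals.min())

def direct_worst_case(losses, gamma):
    """Direct computation of the same quantity: among all reweightings
    with per-point weight at most gamma/n, the adversary puts maximal
    weight on the largest losses until the total mass reaches one."""
    n = len(losses)
    cap = gamma / n
    w = np.zeros(n)
    remaining = 1.0
    for i in np.argsort(losses)[::-1]:  # largest losses first
        w[i] = min(cap, remaining)
        remaining -= w[i]
    return float(w @ losses)
```

With `gamma = 2` the worst-case risk is the mean of the worst half of the losses, and the two computations agree; replacing the expected loss in training with `ru_risk` (jointly minimized over the model and `eta`) is the "augmented convex risk minimization" idea the abstract refers to.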