Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO problems that includes group DRO, subpopulation fairness, and empirical conditional value at risk (CVaR) optimization. Our new algorithms achieve faster convergence rates than existing algorithms in multiple DRO settings. We also provide a new information-theoretic lower bound implying that our bounds are tight for group DRO. Empirically, our algorithms also outperform known methods.
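For context, a standard formulation of the group DRO problem (this is the commonly used formulation from the literature, given here for illustration, not necessarily the exact problem class studied in the paper) minimizes the worst-case convex combination of per-group losses $L_g$ over $m$ groups:
$$
\min_{w \in \mathcal{W}} \; \max_{q \in \Delta_m} \; \sum_{g=1}^{m} q_g \, L_g(w),
\qquad
\Delta_m = \Big\{ q \in \mathbb{R}_{\ge 0}^{m} : \sum_{g=1}^{m} q_g = 1 \Big\}.
$$
Empirical CVaR optimization at level $\alpha$ over $n$ samples fits the same min-max template with per-sample losses and the inner maximization restricted to weights bounded by $q_i \le 1/(\alpha n)$, so that the objective averages the worst $\alpha$-fraction of losses.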