We study the function approximation aspect of distributionally robust optimization (DRO) based on probability metrics such as the Wasserstein distance and the maximum mean discrepancy. Our analysis leverages the insight that existing DRO paradigms hinge on function majorants such as the Moreau-Yosida regularization (supremal convolution). Deviating from these, this paper proposes robust learning algorithms based on smooth function approximation and interpolation. Our methods are simple in form and apply to general loss functions without requiring knowledge of function norms a priori. Furthermore, we analyze the DRO risk bound decomposition by leveraging smooth function approximators and the convergence rate of the empirical kernel mean embedding.
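As background for the probability metrics referenced above, the following is a minimal sketch (not the paper's algorithm) of the empirical kernel mean embedding and the resulting maximum mean discrepancy estimate between two samples; the Gaussian kernel, bandwidth, and sample sizes are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(X, Y, bandwidth=1.0):
    """Biased (V-statistic) estimate of MMD^2 between the empirical measures of X and Y.

    MMD^2(P, Q) = E k(x, x') + E k(y, y') - 2 E k(x, y), i.e. the squared RKHS
    distance between the empirical kernel mean embeddings of P and Q.
    """
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # sample from a reference distribution P
    Y = rng.normal(loc=0.5, scale=1.0, size=(200, 2))  # sample from a shifted distribution Q
    print("empirical MMD^2:", mmd_squared(X, Y, bandwidth=1.0))
```

The empirical embedding of a sample is the average of kernel features over its points; the MMD is the RKHS norm of the difference of two such embeddings, which is what the estimator above computes in closed form.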