Distributionally robust optimization has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e., if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint on a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, we propose tractable computational algorithms. As a byproduct, the strong duality theorem proved in this paper can potentially be applied to other problems of independent interest.
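For concreteness, a schematic of the kind of distributionally robust problem referred to above is sketched below; the loss $\ell$, transport cost $c$, radius $\delta$, coupling $\pi$, and the form of the martingale (conditional-moment) condition are illustrative placeholders rather than the paper's exact assumptions.
\[
\min_{\beta}\ \sup_{P \in \mathcal{U}_{\delta}(P_n)} \mathbb{E}_{(X,Y)\sim P}\big[\ell(\beta; X, Y)\big],
\qquad
\mathcal{U}_{\delta}(P_n) \;=\; \Big\{ P : W_c(P, P_n) \le \delta \ \text{via a coupling } \pi \ \text{with } \mathbb{E}_{\pi}\big[X - X_0 \,\big|\, X_0, Y_0\big] = 0 \Big\},
\]
where $P_n$ denotes the empirical measure of the data $(X_0, Y_0)$, $W_c$ is an optimal transport cost, and the conditional-moment constraint requires the adversarial perturbation of the covariates to be a martingale difference. The abstract's claim is that, under constraints of this type, the worst-case problem recovers a Tikhonov (ridge-type) penalty, whereas relaxing the martingale constraint interpolates toward other known robust regularizers.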