We study distributionally robust optimization (DRO) with the Sinkhorn distance, a variant of the Wasserstein distance based on entropic regularization. We derive convex programming dual reformulations when the nominal distribution is an empirical distribution and a general distribution, respectively. Compared with Wasserstein DRO, Sinkhorn DRO is computationally tractable for a larger class of loss functions, and its worst-case distribution is more reasonable. To solve the dual reformulation, we propose an efficient batch gradient descent with bisection search algorithm. Finally, we provide various numerical examples using both synthetic and real data to demonstrate its competitive performance.
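To make the underlying distance concrete: the Sinkhorn distance smooths the optimal transport problem by an entropic penalty, which lets the transport plan be computed by simple matrix-scaling iterations. The sketch below is a minimal illustration of that entropic-regularized transport cost between two empirical distributions, not the paper's DRO algorithm; the squared-Euclidean ground cost, uniform sample weights, and the `reg` and `n_iters` parameters are all illustrative assumptions.

```python
import numpy as np

def sinkhorn_distance(x, y, reg=1.0, n_iters=200):
    """Entropic-regularized transport cost between two empirical
    distributions (illustrative sketch).

    x, y : (n, d) and (m, d) sample arrays, uniform weights assumed.
    reg  : entropic regularization strength (assumption; smaller values
           approach the unregularized Wasserstein cost but are less
           numerically stable in this naive, non-log-domain form).
    """
    n, m = len(x), len(y)
    a = np.full(n, 1.0 / n)  # uniform weights on the x samples
    b = np.full(m, 1.0 / m)  # uniform weights on the y samples
    # Pairwise squared-Euclidean ground cost matrix.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-C / reg)     # Gibbs kernel
    u = np.ones(n)
    # Sinkhorn fixed-point iterations (alternating marginal scaling).
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # (approximately) optimal plan
    return float((P * C).sum())      # transport cost under that plan
```

For example, shifting a point cloud away from itself should increase the returned cost, while comparing a distribution with itself yields a small value (small but typically nonzero, since the entropic term blurs the plan).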