Seeking informative projecting directions has been an important task in utilizing the sliced Wasserstein distance in applications. However, finding these directions usually requires an iterative optimization procedure over the space of projecting directions, which is computationally expensive. Moreover, the computational issue is even more severe in deep learning applications, where computing the distance between two mini-batch probability measures is repeated several times. This nested loop has been one of the main challenges preventing the use of sliced Wasserstein distances based on good projections in practice. To address this challenge, we propose to utilize the learning-to-optimize technique, or amortized optimization, to predict the informative direction for any given pair of mini-batch probability measures. To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models. In particular, we derive linear amortized models, generalized linear amortized models, and non-linear amortized models, which correspond to three types of novel mini-batch losses, named amortized sliced Wasserstein. We demonstrate the favorable performance of the proposed sliced losses in deep generative modeling on standard benchmark datasets.
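To make the idea concrete, the sketch below illustrates a linear amortized model in the spirit described above: a small learnable map takes two mini-batches and predicts a single projecting direction, which is then used to compute a one-dimensional Wasserstein distance between the projected samples. This is a minimal illustration under assumed design choices (summarizing each mini-batch by its mean, a W1 ground cost, and equal batch sizes), not the authors' exact parameterization or loss.

```python
import torch

class LinearAmortizedDirection(torch.nn.Module):
    """Hypothetical linear amortized model: maps two mini-batches in R^{n x d}
    to a unit projecting direction in R^d. The summary-by-mean parameterization
    is an illustrative assumption, not the paper's definition."""

    def __init__(self, dim: int):
        super().__init__()
        # Weight applied to the concatenated summaries of the two mini-batches.
        self.weight = torch.nn.Parameter(torch.randn(2 * dim, dim) * 0.01)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Summarize each mini-batch by its mean, then map to a direction.
        summary = torch.cat([x.mean(dim=0), y.mean(dim=0)])  # shape (2d,)
        theta = summary @ self.weight                         # shape (d,)
        return theta / (theta.norm() + 1e-12)                 # unit-norm direction


def amortized_sliced_w1(x: torch.Tensor, y: torch.Tensor,
                        amortized_model: LinearAmortizedDirection) -> torch.Tensor:
    """One-dimensional W1 between the projections of x and y onto the direction
    predicted by the amortized model (assumes equal batch sizes)."""
    theta = amortized_model(x, y)
    proj_x, _ = torch.sort(x @ theta)  # sorted 1-D projections of x
    proj_y, _ = torch.sort(y @ theta)  # sorted 1-D projections of y
    return torch.mean(torch.abs(proj_x - proj_y))
```

In a generative-modeling loop, the amortized model would be trained alongside the generator so that a single forward pass replaces the inner iterative search for an informative direction on each pair of mini-batches.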