We introduce a new method for training generative adversarial networks by applying the Wasserstein-2 metric proximal on the generators. The approach is based on Wasserstein information geometry: it defines a parametrization-invariant natural gradient by pulling back optimal transport structures from probability space to parameter space. We obtain easy-to-implement iterative regularizers for the parameter updates of implicit deep generative models. Our experiments demonstrate that this method improves the speed and stability of training in terms of wall-clock time and Fr\'echet Inception Distance.
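The flavor of the iterative regularizer can be illustrated with a toy proximal scheme. The sketch below is a hypothetical, much-simplified stand-in for the paper's method: a one-parameter "generator" g_theta(z) = theta * z, a toy moment-matching loss, and a relaxed Wasserstein-2 proximity approximated by the mean squared difference of generator outputs on shared latent samples (the function names, the loss, and the step sizes are all illustrative assumptions, not the authors' algorithm).

```python
import numpy as np

# Toy proximal-regularized training loop (illustrative sketch only).
# Generator: g_theta(z) = theta * z with scalar theta.
# Objective: match the second moment E[g_theta(z)^2] to 4 (so theta -> 2).
# Proximal term: mean squared output difference between the current and
# previous generator on shared latents, a relaxed W2-style proximity.

rng = np.random.default_rng(0)
z = rng.normal(size=1000)  # shared latent samples

def loss(theta):
    return (np.mean((theta * z) ** 2) - 4.0) ** 2

def prox_penalty(theta, theta_prev):
    # squared difference of generator outputs on the same latents
    return np.mean((theta * z - theta_prev * z) ** 2)

theta, lam, lr, eps = 1.0, 0.5, 0.02, 1e-5
for _ in range(50):                       # outer proximal steps
    theta_prev = theta
    for _ in range(20):                   # inner gradient steps
        obj = lambda t: loss(t) + prox_penalty(t, theta_prev) / (2 * lam)
        grad = (obj(theta + eps) - obj(theta - eps)) / (2 * eps)
        theta -= lr * grad
```

Each outer step minimizes the loss plus a proximity term anchored at the previous generator, which damps large jumps in output space rather than in raw parameter space; that is the qualitative effect the abstract's regularizers aim for.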