Inspired by ideas from optimal transport theory, we present Trust the Critics (TTC), a new algorithm for generative modelling. This algorithm eliminates the trainable generator from a Wasserstein GAN; instead, it iteratively modifies the source data using gradient descent on a sequence of trained critic networks. This is motivated in part by the misalignment we observed between the optimal transport directions provided by the gradients of the critic and the directions in which data points actually move when parametrized by a trainable generator. Previous work has arrived at similar ideas from different viewpoints, but our basis in optimal transport theory motivates the choice of an adaptive step size, which greatly accelerates convergence compared to a constant step size. Using this step size rule, we prove an initial geometric convergence rate in the case of source distributions with densities. These convergence rates cease to apply only when a non-negligible set of generated data is essentially indistinguishable from real data. Resolving the misalignment issue improves performance: in experiments, given a fixed number of training epochs, TTC produces higher-quality images than a comparable WGAN, albeit at increased memory requirements. In addition, TTC provides an iterative formula for the transformed density, which traditional WGANs do not. Finally, TTC can be applied to map any source distribution onto any target; we demonstrate through experiments that TTC can obtain competitive performance in image generation, translation, and denoising without dedicated algorithms.
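To make the core idea concrete, the following is a minimal, hedged sketch of one TTC update on a 1-D toy problem. The samples move along the gradient of a 1-Lipschitz critic, with the adaptive step size set to a fraction `theta` of the estimated Wasserstein-1 distance (the gap between the critic's mean values on target and source). The function names (`ttc_step`), the linear stand-in critic, and the specific value of `theta` are illustrative assumptions, not the authors' implementation, which trains a sequence of critic networks.

```python
import numpy as np

def ttc_step(x_src, x_tgt, critic, grad_critic, theta=0.5):
    """One TTC-style update (illustrative sketch, not the authors' code).

    Moves source samples along the critic's gradient, using the adaptive
    step size: theta times the estimated W1 distance, where the estimate
    is mean critic(target) - mean critic(source) for a 1-Lipschitz critic.
    """
    w1_est = critic(x_tgt).mean() - critic(x_src).mean()
    step = theta * w1_est
    return x_src + step * grad_critic(x_src)

# Toy example: source roughly N(0, 1), target roughly N(3, 1).
# For a pure translation, D(x) = x is an optimal 1-Lipschitz critic,
# with gradient 1 everywhere; we use it as a stand-in for a trained network.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=1000)
tgt = rng.normal(3.0, 1.0, size=1000)
critic = lambda x: x
grad = lambda x: np.ones_like(x)

for _ in range(10):
    src = ttc_step(src, tgt, critic, grad, theta=0.5)
```

With `theta = 0.5` the gap between the two means shrinks geometrically, illustrating the accelerated convergence a proportional step size gives over a small constant one.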