We present Flow-Guided Density Ratio Learning (FDRL), a simple and scalable approach to generative modeling that builds on the stale (time-independent) approximation of the gradient flow of entropy-regularized f-divergences introduced in DGflow. In DGflow, the intractable time-dependent density ratio is approximated by a stale estimator given by a GAN discriminator. This approximation suffices for sample refinement, where the source and target distributions of the flow are close to each other. For generation, however, this assumption no longer holds, and a naive application of the stale estimator fails due to the large chasm between the two distributions. FDRL instead trains a density ratio estimator on progressively improving samples produced over the course of training. We show that this simple method alleviates the density chasm problem, allowing FDRL to generate images at resolutions as high as $128\times128$ and to outperform existing gradient flow baselines on quantitative benchmarks. We further demonstrate the flexibility of FDRL with two use cases. First, an unconditional FDRL model can be easily composed with external classifiers to perform class-conditional generation. Second, FDRL can be applied directly to unpaired image-to-image translation with no modifications to the framework. Code is publicly available at https://github.com/ajrheng/FDRL.
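The core loop described above — estimate a stale density ratio, take gradient-flow steps on the samples, then re-estimate the ratio on the improved samples — can be illustrated with a toy 1D sketch. This is not the paper's implementation: the 1D Gaussian target, quadratic features, logistic-regression ratio estimator, gradient clipping, and all step sizes are illustrative assumptions standing in for the deep network and image-space flow used in FDRL.

```python
import numpy as np

rng = np.random.default_rng(0)

def feats(x):
    # Quadratic features: sufficient here because the log-density ratio
    # of two 1D Gaussians is a quadratic function of x.
    return np.stack([x, x**2], axis=1)

def fit_log_ratio(x_fake, x_real, steps=2000, lr=0.1):
    # Fit log r(x) ~ w.phi(x) + b by logistic regression (real=1, fake=0);
    # the classifier's logit estimates the log density ratio, as in
    # GAN-discriminator-based ratio estimation.
    X = np.concatenate([feats(x_real), feats(x_fake)])
    y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_fake))])
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-8   # standardize for stable GD
    Xs = (X - mu) / sd
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
        g = p - y                                   # gradient of logistic loss
        w -= lr * (Xs.T @ g) / len(y)
        b -= lr * g.mean()
    # Undo the standardization so the logit is expressed in raw features.
    return w / sd, b - float((w * mu / sd).sum())

x_real = rng.normal(3.0, 1.0, 2000)   # target distribution N(3, 1)
x = rng.normal(0.0, 1.0, 2000)        # initial "generated" samples N(0, 1)

eta, tau = 0.05, 0.01                 # flow step size, entropy-regularization strength
for _ in range(30):                   # outer loop: retrain on progressively better samples
    w, b = fit_log_ratio(x, x_real)
    for _ in range(10):               # inner loop: stale gradient-flow (Langevin-like) steps
        grad_log_r = np.clip(w[0] + 2.0 * w[1] * x, -5.0, 5.0)  # d/dx of the fitted logit
        x = x + eta * grad_log_r + np.sqrt(2.0 * tau * eta) * rng.normal(size=x.shape)

print(round(float(x.mean()), 2))      # samples should have drifted toward the target mean of 3
```

The key point of the sketch is the outer loop: the ratio estimator is always stale within an inner loop, but retraining it on the flow's own improving samples keeps the estimated ratio valid locally, which is what lets the method bridge the density chasm between source and target.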