We study a new generative modeling technique based on adversarial training (AT). We show that in a setting where the model is trained to discriminate in-distribution data from adversarial examples perturbed from out-distribution samples, the model learns the support of the in-distribution data. The learning process is also closely related to MCMC-based maximum likelihood learning of energy-based models (EBMs), and can be viewed as an approximate maximum likelihood learning method. We show that this AT generative model achieves image generation performance competitive with state-of-the-art EBMs, while being more stable to train and more sample-efficient. We further demonstrate that the AT generative model is well-suited to image translation and worst-case out-of-distribution detection.