Deep generative models such as GANs have driven impressive advances in conditional image synthesis in recent years. A persistent challenge has been to generate diverse versions of output images from the same input image, due to the problem of mode collapse: because only one ground truth output image is given per input image, only one mode of the conditional distribution is modelled. In this paper, we focus on this problem of multimodal conditional image synthesis and build on the recently proposed technique of Implicit Maximum Likelihood Estimation (IMLE). Prior IMLE-based methods required different architectures for different tasks, which limits their applicability, and their generated images lacked fine details. We propose CAM-Net, a unified architecture that can be applied to a broad range of tasks. Additionally, it is capable of generating convincing high-frequency details, achieving a reduction of the Fréchet Inception Distance (FID) by up to 45.3% compared to the baseline.
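The core IMLE idea referenced above can be sketched briefly: instead of pushing a single generated output toward the ground truth (which collapses the conditional distribution to one mode), IMLE draws several latent codes per input, generates a candidate output for each, and penalizes only the distance from the ground truth to its *nearest* candidate. The following is a minimal, hypothetical NumPy sketch of that objective; the linear `generator` and weight matrix `W` are illustrative stand-ins, not the CAM-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(x, z, W):
    # Hypothetical stand-in generator: a linear map applied to the
    # concatenation of the conditioning input x and a latent code z.
    return W @ np.concatenate([x, z])

def imle_loss(x, y, W, n_samples=8, latent_dim=4):
    # Draw several latent codes and generate one candidate output per code.
    zs = rng.standard_normal((n_samples, latent_dim))
    outputs = np.stack([generator(x, z, W) for z in zs])
    # Distance from the single ground truth y to each candidate.
    dists = np.linalg.norm(outputs - y, axis=1)
    # Only the nearest candidate incurs a loss, so the other samples
    # remain free to cover other modes of the conditional distribution.
    return dists.min()
```

Because only the closest sample is pulled toward each ground truth, distinct latent codes can specialize to distinct plausible outputs for the same input, which is what enables the multimodal synthesis discussed above.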