Generative Adversarial Networks (GANs) have recently introduced effective methods of performing Image-to-Image translation. These models can be applied and generalized to a variety of Image-to-Image translation domains without changing any parameters. In this paper, we survey and analyze eight Image-to-Image Generative Adversarial Networks: Pix2Pix, CycleGAN, CoGAN, StarGAN, MUNIT, StarGAN2, DA-GAN, and Self-Attention GAN. Each of these models presented state-of-the-art results and introduced new techniques for building Image-to-Image GANs. In addition to surveying the models, we also survey the 18 datasets they were trained on and the 9 metrics they were evaluated on. Finally, we present the results of a controlled experiment in which 6 of these models were evaluated on a common set of metrics and datasets. The results were mixed: on certain datasets, tasks, and metrics, some models outperformed others. The last section of this paper discusses these results and identifies areas for future research. As researchers continue to develop new Image-to-Image GANs, it is important that they have a solid understanding of existing methods, datasets, and metrics. This paper provides a comprehensive overview and discussion to help build that foundation.
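To make the adversarial setup shared by these models concrete, the following is a minimal sketch of the standard GAN objective on which the surveyed architectures build: a discriminator is trained to score real images as 1 and generated images as 0, while the generator is trained (with the common non-saturating variant) to make the discriminator score its outputs as 1. This is a simplified illustration, not the exact loss of any particular surveyed model; the function names are hypothetical and the discriminator outputs are stand-in probabilities rather than real network activations.

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single probability in (0, 1)."""
    eps = 1e-12  # numerical guard against log(0)
    return -(target * math.log(pred + eps) + (1 - target) * math.log(1 - pred + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants D(real) -> 1 and D(G(x)) -> 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # Non-saturating generator loss: the generator wants D(G(x)) -> 1.
    return bce(d_fake, 1.0)

# Toy scores: a confident discriminator incurs lower loss than an uncertain one,
# and the generator's loss shrinks as it fools the discriminator.
confident = discriminator_loss(d_real=0.9, d_fake=0.1)
uncertain = discriminator_loss(d_real=0.5, d_fake=0.5)
print(confident < uncertain)                          # discriminator improves with confidence
print(generator_loss(0.9) < generator_loss(0.1))      # generator improves as it fools D
```

Models such as Pix2Pix condition both networks on an input image and add reconstruction terms on top of this adversarial core, which is where the surveyed architectures chiefly differ.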