We propose a novel and unified Cycle in Cycle Generative Adversarial Network (C2GAN) for generating human faces, hands, bodies, and natural scenes. Our proposed C2GAN is a cross-modal model that explores the joint exploitation of input image data and guidance data in an interactive manner. C2GAN contains two different generators, i.e., an image-generation generator and a guidance-generation generator. Both generators are mutually connected, trained in an end-to-end fashion, and explicitly form three cycled subnets, i.e., one image-generation cycle and two guidance-generation cycles. Each cycle aims at reconstructing the input domain while simultaneously producing a useful output involved in the generation of another cycle. In this way, the cycles implicitly constrain each other, providing complementary information from both the image and guidance modalities and adding an extra supervision gradient across the cycles, which facilitates more robust optimization of the whole model. Extensive results on four guided image-to-image translation subtasks demonstrate that the proposed C2GAN is effective in generating more realistic images compared with state-of-the-art models. The code is available at https://github.com/Ha0Tang/C2GAN.
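To make the three-cycle structure concrete, the following is a minimal sketch of the coupled cycle reconstruction losses described above. It assumes the guidance is a single-channel keypoint-style map with the same spatial size as the image; the `ToyGenerator` architecture, all names (`G_image`, `G_guide`, `c2gan_cycle_losses`), and the loss weighting are illustrative assumptions, not the authors' implementation (adversarial terms are omitted; see the official repository linked above for the real code).

```python
# Illustrative sketch of C2GAN's cycle structure (assumptions noted above).
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder conv generator: maps a channel-stacked input to an output."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# Two mutually connected generators (assumed I/O: 3-channel image,
# 1-channel guidance map of the same spatial size).
G_image = ToyGenerator(in_channels=3 + 1, out_channels=3)  # image-generation generator
G_guide = ToyGenerator(in_channels=3, out_channels=1)      # guidance-generation generator

l1 = nn.L1Loss()

def c2gan_cycle_losses(x_a, g_a, g_b):
    """Reconstruction terms of the three coupled cycles.

    x_a: source image, g_a: source guidance, g_b: target guidance.
    """
    # Image cycle: x_a --(g_b)--> x_b' --(g_a)--> x_a''
    x_b_fake = G_image(torch.cat([x_a, g_b], dim=1))
    x_a_rec = G_image(torch.cat([x_b_fake, g_a], dim=1))
    loss_image_cycle = l1(x_a_rec, x_a)

    # Guidance cycle A: recover the source guidance from the reconstructed image.
    loss_guide_a = l1(G_guide(x_a_rec), g_a)

    # Guidance cycle B: recover the target guidance from the generated image,
    # so one cycle's output feeds the other and gradients cross the cycles.
    loss_guide_b = l1(G_guide(x_b_fake), g_b)

    return loss_image_cycle + loss_guide_a + loss_guide_b

# Smoke test with random tensors.
x_a = torch.randn(2, 3, 64, 64)
g_a = torch.randn(2, 1, 64, 64)
g_b = torch.randn(2, 1, 64, 64)
loss = c2gan_cycle_losses(x_a, g_a, g_b)
loss.backward()
print(loss.item())
```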