Multimodal image-to-image translation (I2IT) aims to learn a conditional distribution that explores multiple possible images in the target domain given an input image in the source domain. Conditional generative adversarial networks (cGANs) are often adopted to model such a conditional distribution. However, cGANs are prone to ignoring the latent code and learning a unimodal distribution in conditional image synthesis, which is also known as the mode collapse issue of GANs. To solve this problem, we propose a simple yet effective method that explicitly estimates and maximizes the mutual information between the latent code and the output image in cGANs using a deep mutual information neural estimator. Maximizing the mutual information strengthens the statistical dependency between the latent code and the output image, which prevents the generator from ignoring the latent code and encourages cGANs to fully exploit the latent code for synthesizing diverse results. Our method not only provides a new information-theoretic perspective on improving diversity for I2IT but also achieves disentanglement between source-domain content and target-domain style for free.
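As a concrete illustration of the mechanism described above, the sketch below shows how a MINE-style statistics network can estimate a Donsker-Varadhan lower bound on the mutual information between the latent code and the generator output, which is then maximized as an auxiliary generator loss. This is a minimal PyTorch sketch under assumed names (`StatNet`, `mi_lower_bound`, `G`, `encode_features`); it does not reproduce the paper's exact architecture or training details.

```python
import math
import torch
import torch.nn as nn

class StatNet(nn.Module):
    """Statistics network T(z, y): scores latent-code / image-feature pairs."""
    def __init__(self, z_dim, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, y_feat):
        return self.net(torch.cat([z, y_feat], dim=1))

def mi_lower_bound(stat_net, z, y_feat):
    """Donsker-Varadhan lower bound on I(z; y).

    Joint samples pair each latent code with features of its own output;
    marginal samples pair shuffled latent codes with the same features.
    """
    joint = stat_net(z, y_feat).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    marginal = torch.logsumexp(stat_net(z_shuffled, y_feat), dim=0) \
               - math.log(z.size(0))
    return (joint - marginal).squeeze()

# Hypothetical usage inside the generator update:
# z = torch.randn(batch, z_dim)
# y = G(x, z)                                  # translated image
# mi = mi_lower_bound(stat_net, z, encode_features(y).flatten(1))
# loss_G = adversarial_loss - lambda_mi * mi   # subtract to maximize MI
```

Penalizing the generator with the negative of this bound ties the output to the latent code: if the generator ignored z, the joint and marginal terms would coincide and the estimated mutual information would collapse to zero.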