I present IGAN (Inferent Generative Adversarial Networks), a neural architecture that learns both a generative and an inference model on a complex, high-dimensional data distribution, i.e., a bidirectional mapping between data samples and a simpler low-dimensional latent space. It extends the traditional GAN framework with inference by recasting the adversarial strategy in both the image and the latent spaces as an entangled game between encoded posteriors and priors. It brings measurable stability and convergence to the classical GAN scheme, while preserving its generative quality and remaining simple and frugal enough to run on a lab PC. IGAN encourages the encoded latents to span the full prior space, enabling the exploitation of an enlarged and self-organised latent space in an unsupervised manner. An analysis of previously published articles lays the theoretical groundwork for the proposed algorithm. A qualitative demonstration of potential applications such as self-supervision or multi-modal data translation is given on common image datasets, including SAR and optical imagery.
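To make the bidirectional adversarial idea concrete, the sketch below shows one possible reading of such a setup: a generator G (latent to image), an encoder E (image to latent), a data-space discriminator D_x, and a latent-space discriminator D_z, trained so that generated images match real ones and encoded latents match the prior. This is a minimal illustrative assumption in PyTorch, with hypothetical module sizes and loss pairing; it is not the paper's exact IGAN objective.

```python
# Minimal sketch (assumed, not the paper's exact formulation) of a bidirectional
# adversarial game: G maps latents to images, E maps images to latents, D_x judges
# images, D_z judges latents.  Sizes and architectures are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # assumed sizes (e.g. flattened 28x28 images)

G   = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
E   = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
D_x = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
D_z = nn.Sequential(nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce   = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(list(D_x.parameters()) + list(D_z.parameters()), lr=2e-4)
opt_g = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)

def train_step(x_real):
    """One illustrative update: the discriminators separate real/prior samples from
    fake/posterior ones, then G and E are updated to fool both critics."""
    b = x_real.size(0)
    z_prior = torch.randn(b, latent_dim)            # samples from the latent prior
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: real images vs. generated, prior latents vs. encoded.
    x_fake = G(z_prior).detach()
    z_post = E(x_real).detach()
    loss_d = (bce(D_x(x_real), ones) + bce(D_x(x_fake), zeros)
              + bce(D_z(z_prior), ones) + bce(D_z(z_post), zeros))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator/encoder step: push generated images and encoded latents to be
    # classified as "real"/"prior" by the two discriminators.
    loss_g = bce(D_x(G(z_prior)), ones) + bce(D_z(E(x_real)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example usage with a random batch standing in for real data:
if __name__ == "__main__":
    x = torch.rand(32, image_dim) * 2 - 1  # placeholder "real" batch scaled to [-1, 1]
    print(train_step(x))
```

The design choice illustrated here is that pressure on the encoder comes from the latent-space critic, which pushes the encoded posteriors toward the prior and thus toward covering the full latent space, in line with the behaviour the abstract attributes to IGAN.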