
Title: Diverse Image Generation via Self-Conditioned GANs

Abstract:

This paper introduces a simple but effective unsupervised method for generating realistic and diverse images: a class-conditional GAN model is trained without using manually annotated class labels. Instead, the model is conditioned on labels obtained automatically by clustering in the discriminator's feature space. The clustering step automatically discovers diverse modes and explicitly requires the generator to cover them. Experiments on standard mode-collapse benchmarks show that the method outperforms several competing approaches at addressing mode collapse. The method also performs well on large-scale datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics compared to previous methods.
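The self-conditioning step can be sketched in a few lines. The following is a minimal, hypothetical PyTorch/scikit-learn sketch, not the authors' implementation: the feature hook `discriminator.features`, the cluster count `k=50`, and a non-shuffled `dataloader` are assumptions made for illustration.

```python
# Sketch of the self-conditioning idea: periodically cluster discriminator
# features into k pseudo-classes and use the cluster assignments as the
# conditioning labels for a class-conditional GAN.
import torch
from sklearn.cluster import KMeans

def recompute_pseudo_labels(discriminator, dataloader, k=50, device="cpu"):
    """Cluster discriminator features over the dataset to obtain pseudo-labels."""
    feats = []
    with torch.no_grad():
        for x, _ in dataloader:
            # Assumed hook: an intermediate feature embedding from the discriminator.
            feats.append(discriminator.features(x.to(device)).cpu())
    feats = torch.cat(feats).numpy()
    kmeans = KMeans(n_clusters=k, n_init=10).fit(feats)
    # One pseudo-label per training image, in dataloader order (no shuffling assumed).
    return torch.from_numpy(kmeans.labels_).long(), kmeans.cluster_centers_

# During GAN training (sketch), both networks are conditioned on these labels:
#   y    = pseudo_labels[torch.randint(len(pseudo_labels), (batch_size,))]
#   fake = generator(torch.randn(batch_size, z_dim), y)
#   out  = discriminator(fake, y)
```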


Latest Content

Image content is a predominant factor in marketing campaigns, websites and banners. Today, marketers and designers spend considerable time and money in generating such professional quality content. We take a step towards simplifying this process using Generative Adversarial Networks (GANs). We propose a simple and novel conditioning strategy which allows generation of images conditioned on given semantic attributes using a generator trained for an unconditional image generation task. Our approach is based on modifying latent vectors, using directional vectors of relevant semantic attributes in latent space. Our method is designed to work with both discrete (binary and multi-class) and continuous image attributes. We show the applicability of our proposed approach, named Directional GAN, on multiple public datasets, with an average accuracy of 86.4% across different attributes.
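The conditioning strategy described above (shifting latent vectors along attribute directions) can be illustrated with a short, hypothetical sketch; the generator `G`, the `load_direction` helper, and the 512-dimensional latent space are assumptions for illustration, not the paper's released code.

```python
# Sketch of attribute-conditioned generation via latent directions:
# move a latent sample z along a unit direction associated with an attribute.
import numpy as np

def move_along_direction(z, direction, strength=1.0):
    """Shift latent vector z along a unit-normalized semantic attribute direction."""
    d = np.asarray(direction, dtype=np.float64)
    d = d / np.linalg.norm(d)
    return z + strength * d

# Hypothetical usage with a pretrained unconditional generator `G` and a
# precomputed direction for a binary attribute (e.g. "smiling"):
#   z = np.random.randn(512)             # assumed 512-dim latent sample
#   d_smile = load_direction("smiling")   # assumed helper, e.g. the normal of a
#                                         # linear classifier trained in latent space
#   img = G(move_along_direction(z, d_smile, strength=2.0))
```

The `strength` parameter controls how far the sample moves along the direction, trading off attribute intensity against fidelity to the original latent sample.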

