An autoencoder is a type of artificial neural network used to learn efficient data encodings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal "noise". Alongside the reduction side, a reconstruction side is learned, in which the autoencoder tries to generate, from the reduced encoding, a representation as close as possible to its original input, which is where its name comes from. Several variants of the basic model exist, whose aim is to force the learned representations of the input to take on useful properties. Autoencoders are effective for many applied problems, from face recognition to acquiring the semantics of words.
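To make the encode-then-reconstruct idea concrete, here is a minimal sketch (not part of the original text) of a fully connected autoencoder trained with a reconstruction loss; PyTorch, the layer sizes, and the 784-dimensional input are illustrative assumptions.

```python
# Minimal autoencoder sketch (illustrative; PyTorch and flattened 784-d inputs assumed).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder compresses the input to a low-dimensional code (dimensionality reduction).
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder tries to reconstruct the original input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)                      # dummy batch of inputs
loss = nn.functional.mse_loss(model(x), x)   # reconstruction loss: output should match input
loss.backward()
```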

VIP Content

Topic: GANs in computer vision: Introduction to generative learning

Summary: In this survey series, we focus on the large body of GANs for computer vision applications. Specifically, we gradually build up the ideas and principles that led to the evolution of generative adversarial networks (GANs), and we encounter different tasks such as conditional image generation, 3D object generation, and video synthesis (a minimal adversarial training sketch follows the table of contents below).

Table of contents:

  • Adversarial learning
  • GAN (generative adversarial network)
  • Conditional generative adversarial networks
  • Unsupervised representation learning with deep convolutional generative adversarial networks (DCGAN)
  • InfoGAN: representation learning by information-maximizing generative adversarial nets
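As a complement to the topics listed above, the following is a minimal, hypothetical sketch of one adversarial training step, with a generator pitted against a discriminator; PyTorch, the network sizes, and the dummy batch are illustrative assumptions, not code from the survey.

```python
# Minimal GAN training-step sketch (illustrative; PyTorch assumed, not from the survey).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784

# Generator maps random noise z to fake samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, data_dim) * 2 - 1          # placeholder "real" batch in [-1, 1]
z = torch.randn(32, latent_dim)
fake = G(z)

# Discriminator step: label real samples 1 and generated samples 0.
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator into scoring fakes as real.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```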

In general, data generation methods appear in a wide range of modern deep learning applications, from computer vision to natural language processing. At this point, we can generate data that is almost indistinguishable from real data to the naked eye. Generative learning can be broadly divided into two major categories: a) variational autoencoders (VAEs) and b) generative adversarial networks (GANs).


Latest Content

Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model's decoder learns to ignore signals from the encoder. Because posterior collapse is known to be exacerbated by expressive decoders, Transformers have seen limited adoption as components of text VAEs. Existing studies that incorporate Transformers into text VAEs (Li et al., 2020; Fang et al., 2021) mitigate posterior collapse using massive pretraining, a technique unavailable to most of the research community without extensive computing resources. We present a simple two-phase training scheme to convert a sequence-to-sequence Transformer into a VAE with just finetuning. The resulting language model is competitive with massively pretrained Transformer-based VAEs in some internal metrics while falling short on others. To facilitate training, we comprehensively explore the impact of common posterior collapse alleviation techniques in the literature. We release our code for reproducibility.
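The abstract refers to common posterior-collapse alleviation techniques from the literature. As a rough illustration of two of them, KL-weight annealing and a free-bits floor on the KL term, here is a sketch of a VAE objective; PyTorch and all names and hyperparameters are assumptions, and this is not the paper's actual two-phase Transformer training scheme.

```python
# Sketch of a VAE objective with two common posterior-collapse mitigations:
# KL-weight annealing and a "free bits" floor. Illustrative only (PyTorch assumed);
# this does not reproduce the paper's two-phase scheme.
import torch

def vae_loss(recon_nll, mu, logvar, step, anneal_steps=10_000, free_bits=0.5):
    # Per-dimension KL between q(z|x) = N(mu, sigma^2) and the standard normal prior.
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)   # (batch, latent_dim)

    # Free bits: do not penalize dimensions whose KL is already below the floor,
    # so the decoder cannot drive every latent dimension to zero information.
    kl = torch.clamp(kl_per_dim, min=free_bits).sum(dim=-1).mean()

    # KL annealing: ramp the KL weight from 0 to 1 so reconstruction is learned first.
    beta = min(1.0, step / anneal_steps)

    return recon_nll + beta * kl

# Example usage with dummy encoder statistics.
mu, logvar = torch.zeros(8, 32), torch.zeros(8, 32)
loss = vae_loss(recon_nll=torch.tensor(80.0), mu=mu, logvar=logvar, step=2_500)
```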

