We introduce Exemplar VAEs, a family of generative models that bridge the gap between parametric and non-parametric, exemplar-based generative models. The Exemplar VAE is a variant of the VAE with a non-parametric prior in the latent space based on a Parzen window estimator. To sample from it, one first draws a random exemplar from the training set, then stochastically transforms that exemplar into a latent code and a new observation. We propose retrieval augmented training (RAT) to speed up Exemplar VAE training, using approximate nearest neighbor search in the latent space to define a lower bound on the log marginal likelihood. To enhance generalization, model parameters are learned with exemplar leave-one-out and subsampling. Experiments demonstrate the effectiveness of Exemplar VAEs on density estimation and representation learning. Notably, generative data augmentation with Exemplar VAEs on permutation-invariant MNIST and Fashion MNIST reduces classification error from 1.17% to 0.69% and from 8.56% to 8.16%, respectively.
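The three-step sampling procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear maps `W_enc` and `W_dec` and the noise scale `SIGMA` are hypothetical stand-ins for a trained exemplar-conditioned encoder r(z | x') and decoder p(x | z).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a trained Exemplar VAE (hypothetical parameters):
# r(z | x') transforms an exemplar into a Gaussian over latent codes,
# p(x | z) decodes a latent code into a new observation.
LATENT_DIM, DATA_DIM = 2, 4
W_enc = rng.normal(size=(LATENT_DIM, DATA_DIM))  # exemplar -> latent mean
W_dec = rng.normal(size=(DATA_DIM, LATENT_DIM))  # latent -> observation mean
SIGMA = 0.1                                      # shared transformation noise

def sample(train_set: np.ndarray) -> np.ndarray:
    """One draw from the Exemplar VAE generative process."""
    # 1. Draw a random exemplar x' from the training set.
    x_prime = train_set[rng.integers(len(train_set))]
    # 2. Stochastically transform it into a latent code z ~ r(z | x').
    z = W_enc @ x_prime + SIGMA * rng.normal(size=LATENT_DIM)
    # 3. Decode z into a new observation x ~ p(x | z).
    return W_dec @ z + SIGMA * rng.normal(size=DATA_DIM)

train_set = rng.normal(size=(10, DATA_DIM))
x_new = sample(train_set)
print(x_new.shape)  # (4,)
```

Because every sample is anchored to a training exemplar, the aggregate prior over latent codes is exactly a Parzen window (mixture) estimator centered on the encoded training set.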