The variational auto-encoder (VAE) is a deep latent variable model that consists of two neural networks arranged in an autoencoder-like architecture, one of which parameterizes the model's likelihood. Fitting its parameters via maximum likelihood (ML) is challenging because the marginal likelihood involves an intractable integral over the latent space; the VAE is therefore trained instead by maximizing a variational lower bound. Here, we develop an ML training scheme for VAEs by introducing unbiased estimators of the log-likelihood gradient. We obtain the estimators by augmenting the latent space with a set of importance samples, similarly to the importance weighted auto-encoder (IWAE), and then constructing a Markov chain Monte Carlo coupling procedure on this augmented space. We provide conditions under which the estimators can be computed in finite time and with finite variance. We show experimentally that VAEs fitted with unbiased estimators exhibit better predictive performance.
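The importance-sampling augmentation mentioned above can be illustrated on a toy conjugate-Gaussian model where the marginal likelihood is known in closed form. The sketch below (an illustration, not the paper's method; the model, the proposal, and the function names are choices made here) computes the IWAE-style estimate log((1/K) Σ_k p(x, z_k)/q(z_k|x)), which lower-bounds log p(x) in expectation; with the exact posterior as proposal the importance weights are constant and the estimate recovers log p(x) exactly.

```python
import numpy as np

def log_normal(x, mean, var):
    """Log-density of N(mean, var) evaluated at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def iwae_log_marginal(x, K, rng):
    """IWAE-style estimate of log p(x) with K importance samples.

    Toy model (an assumption for illustration): prior p(z) = N(0, 1),
    likelihood p(x|z) = N(x; z, 1), so the exact posterior is
    p(z|x) = N(x/2, 1/2) and the marginal is p(x) = N(x; 0, 2).
    We use the exact posterior as the proposal q(z|x).
    """
    z = rng.normal(loc=x / 2, scale=np.sqrt(0.5), size=K)
    log_w = (log_normal(z, 0.0, 1.0)        # prior log p(z)
             + log_normal(x, z, 1.0)        # likelihood log p(x|z)
             - log_normal(z, x / 2, 0.5))   # proposal log q(z|x)
    # log((1/K) sum_k exp(log_w_k)), computed stably via log-sum-exp
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

Because the proposal here equals the exact posterior, every weight equals p(x) and the estimate is exact for any K; with an approximate proposal (as in a trained VAE), the estimate is a biased-downward lower bound whose bias shrinks as K grows, which is precisely the gap the coupling construction in the abstract removes.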