Variational auto-encoders (VAEs) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO). To obtain a tighter ELBO, and hence better variational approximations, it has been proposed to use importance sampling to obtain a lower-variance estimate of the evidence. However, importance sampling is known to perform poorly in high dimensions. While it has been suggested many times in the literature to use more sophisticated algorithms such as Annealed Importance Sampling (AIS) and its Sequential Importance Sampling (SIS) extensions, the potential benefits brought by these advanced techniques have never been realized for VAEs: the AIS estimate cannot be easily differentiated, while SIS requires the specification of carefully chosen backward Markov kernels. In this paper, we address both issues and demonstrate the performance of the resulting Monte Carlo VAEs on a variety of applications.
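To make the importance-sampling idea concrete, here is a minimal NumPy sketch of the importance-weighted evidence estimate that underlies such tighter bounds, on a toy one-dimensional Gaussian model. The model, the proposal, and all function names are illustrative assumptions, not the paper's actual architecture; the toy proposal happens to be the exact posterior, so the estimate recovers the evidence.

```python
import numpy as np

def log_mean_exp(a):
    # Numerically stable log(mean(exp(a))).
    m = a.max()
    return m + np.log(np.mean(np.exp(a - m)))

def iw_evidence_bound(x, K, rng):
    """K-sample importance-weighted estimate of log p(x) for a toy model:
    prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1).
    The proposal q(z|x) = N(x/2, 1/2) is the exact posterior here,
    chosen so the estimate is exact for any K (illustrative only)."""
    mu, var = x / 2.0, 0.5
    z = rng.normal(mu, np.sqrt(var), size=K)
    log_p_z = -0.5 * (np.log(2 * np.pi) + z**2)
    log_p_x_given_z = -0.5 * (np.log(2 * np.pi) + (x - z)**2)
    log_q = -0.5 * (np.log(2 * np.pi * var) + (z - mu)**2 / var)
    # log (1/K) * sum_k p(x, z_k) / q(z_k | x)
    return log_mean_exp(log_p_z + log_p_x_given_z - log_q)

x = 1.3
rng = np.random.default_rng(0)
# True evidence: marginally x ~ N(0, 2).
log_px = -0.5 * (np.log(4 * np.pi) + x**2 / 2)
print(iw_evidence_bound(x, 100, rng), log_px)
```

With an imperfect proposal, the estimate is a lower bound in expectation and tightens as K grows; in high dimensions, however, the importance weights degenerate, which is the failure mode the abstract refers to.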