The importance of Variational Autoencoders reaches far beyond standalone generative models; the approach is also used for learning latent representations and can be generalized to semi-supervised learning. This requires a thorough analysis of their commonly known shortcomings: posterior collapse and approximation errors. This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family, in particular under conditional independence assumptions. We identify the subclass of generative models that is consistent with the encoder family. We show that the ELBO optimizer is pulled away from the likelihood optimizer towards this consistent subset. Furthermore, this subset cannot be enlarged, and the respective error cannot be decreased, merely by considering deeper encoder networks.
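As a brief restatement of the gap in question (this is the standard ELBO identity, not a result specific to this paper), for any encoder distribution q(z|x) taken from the chosen family Q,

\[
\log p_\theta(x) \;-\; \mathrm{ELBO}(\theta, q; x) \;=\; \mathrm{KL}\big(q(z \mid x)\,\|\,p_\theta(z \mid x)\big) \;\ge\; 0 .
\]

Maximizing the ELBO over the decoder parameters \(\theta\) therefore trades off data likelihood against how well the true posterior \(p_\theta(z \mid x)\) can be matched within Q; with a conditionally independent (factorized) encoder, the gap can vanish only for generative models whose posteriors themselves factorize, which is presumably the consistent subclass referred to above.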