Many crucial problems in deep learning and statistics are caused by the variational gap, i.e., the difference between the evidence and the evidence lower bound (ELBO). As a consequence, in the classical VAE model we obtain only a lower bound on the log-likelihood, since the ELBO is used as the cost function, and therefore we cannot compare log-likelihoods between models. In this paper, we present a general and effective upper bound on the variational gap, which allows us to efficiently estimate the true evidence. We provide an extensive theoretical study of the proposed approach. Moreover, we show that by applying our estimation we can easily obtain both lower and upper bounds on the log-likelihood of VAE models.
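The variational gap referred to above can be made explicit via the standard decomposition of the evidence (a well-known identity, stated here for context rather than taken from this paper):

```latex
\log p_\theta(x)
\;=\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\mathrm{ELBO}(x)}
\;+\;
\underbrace{\mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\right)}_{\text{variational gap}}
```

Since the KL term is non-negative, the ELBO is always a lower bound on \(\log p_\theta(x)\); conversely, any upper bound \(U(x)\) on the gap immediately sandwiches the evidence, \(\mathrm{ELBO}(x) \le \log p_\theta(x) \le \mathrm{ELBO}(x) + U(x)\), which is the mechanism by which the proposed bound yields two-sided log-likelihood estimates for VAE models.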