The central objective function of a variational autoencoder (VAE) is its variational lower bound. Here we show that for standard VAEs the variational bound converges to a value given by the sum of three entropies: the (negative) entropy of the latent distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions. Our derived analytical results are exact and apply to small as well as complex neural networks for decoder and encoder. Furthermore, they apply for finitely as well as infinitely many data points and at any stationary point (including local and global maxima). As a consequence, we show that the variance parameters of the encoder and decoder play the key role in determining the values of the variational bound at stationary points. Furthermore, the obtained results can allow for closed-form analytical expressions at points of convergence, which may be unexpected since neither the variational lower bounds of VAEs nor the log-likelihoods of VAEs are available in closed form during learning. As our main contribution, we provide the proofs of convergence of standard VAEs to sums of entropies. Furthermore, we numerically verify our analytical results and discuss some potential applications. The obtained equality to entropy sums provides novel information about those points in parameter space that variational learning converges to. As such, we believe, it can contribute to our understanding of established as well as novel VAE approaches.
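To make the entropy-sum claim concrete, the following is a minimal sketch of the stated limit, assuming a standard Gaussian VAE (prior $p_\Theta(z)=\mathcal{N}(0,I_H)$, decoder with variance $\sigma^2$, encoder with per-dimension variances $\tau_h^2(x^{(n)})$); the symbols $N$, $H$, $D$, $\sigma^2$, and $\tau_h^2$ are illustrative notation, not fixed by this abstract.

% Sketch of the entropy-sum value of the variational bound
% \mathcal{F}(\Phi,\Theta) at stationary points, under the Gaussian
% assumptions stated above (notation is illustrative):
\begin{align}
  \mathcal{F}(\Phi,\Theta)
    &= \frac{1}{N}\sum_{n=1}^{N} \mathcal{H}\!\left[q_\Phi(z \,|\, x^{(n)})\right]
       \;-\; \mathcal{H}\!\left[p_\Theta(z)\right]
       \;-\; \frac{1}{N}\sum_{n=1}^{N}
             \mathbb{E}_{q_\Phi(z|x^{(n)})}\!\left[\mathcal{H}\!\left[p_\Theta(x \,|\, z)\right]\right].
\end{align}
% For Gaussian distributions each entropy is closed-form, e.g.
% \mathcal{H}[\mathcal{N}(\mu,\sigma^2 I_D)] = (D/2)\log(2\pi e \sigma^2),
% so the stationary-point value is determined by the variance
% parameters alone:
\begin{align}
  \mathcal{H}\!\left[p_\Theta(z)\right] &= \tfrac{H}{2}\log(2\pi e), \\
  \mathcal{H}\!\left[p_\Theta(x \,|\, z)\right] &= \tfrac{D}{2}\log(2\pi e\,\sigma^2), \\
  \mathcal{H}\!\left[q_\Phi(z \,|\, x^{(n)})\right] &= \tfrac{1}{2}\sum_{h=1}^{H}\log\!\left(2\pi e\,\tau_h^2(x^{(n)})\right).
\end{align}

Here the average encoder entropy is added while the prior entropy and the expected decoder entropy are subtracted, matching the "sum of three entropies" described above; with fixed decoder variance $\sigma^2$ the inner expectation is trivial, which illustrates why the variance parameters determine the value of the bound at convergence.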