When factorized approximations are used for variational inference (VI), they tend to underestimate the uncertainty -- as measured in various ways -- of the distributions they are meant to approximate. We consider two popular ways to measure the uncertainty deficit of VI: (i) the degree to which it underestimates the componentwise variance, and (ii) the degree to which it underestimates the entropy. To better understand these effects, and the relationship between them, we examine an informative setting where they can be explicitly (and elegantly) analyzed: the approximation of a Gaussian,~$p$, with a dense covariance matrix, by a Gaussian,~$q$, with a diagonal covariance matrix. We prove that $q$ always underestimates both the componentwise variance and the entropy of $p$, \textit{though not necessarily to the same degree}. Moreover, we demonstrate that the entropy of $q$ is determined by the trade-off of two competing forces: it is decreased by the shrinkage of its componentwise variances (our first measure of uncertainty), but it is increased by the factorized approximation, which delinks the nodes in the graphical model of $p$. We study various manifestations of this trade-off, notably one where, as the dimension of the problem grows, the per-component entropy gap between $p$ and $q$ becomes vanishingly small even though $q$ underestimates every componentwise variance by a constant multiplicative factor. We also use the shrinkage-delinkage trade-off to bound the entropy gap in terms of the problem dimension and the condition number of the correlation matrix of $p$. Finally, we present empirical results on both Gaussian and non-Gaussian targets, the former to validate our analysis and the latter to explore its limitations.
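The Gaussian-on-Gaussian setting described above admits a well-known closed form: the mean-field Gaussian $q$ minimizing $\mathrm{KL}(q \,\|\, p)$ matches the mean of $p$ and takes componentwise variances equal to the reciprocal diagonal of the precision matrix of $p$. The following sketch (assuming only NumPy; the matrix here is an arbitrary random example, not one from the paper) numerically checks both claims -- that $q$ underestimates every componentwise variance and the entropy of $p$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# A random dense, symmetric positive-definite covariance for the target p.
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)
Lambda = np.linalg.inv(Sigma)  # precision matrix of p

# Optimal factorized (diagonal-covariance) q minimizing KL(q || p):
# its variances are the reciprocal diagonal entries of the precision.
q_var = 1.0 / np.diag(Lambda)

# (i) q underestimates every componentwise variance of p.
assert np.all(q_var <= np.diag(Sigma))

# (ii) q underestimates the entropy of p. The additive Gaussian-entropy
# constants cancel in the gap, so we compare the log-determinant terms.
H_p = 0.5 * np.linalg.slogdet(Sigma)[1]
H_q = 0.5 * np.sum(np.log(q_var))
assert H_q <= H_p

print("variance ratios q/p:", q_var / np.diag(Sigma))
print("entropy gap H(p) - H(q):", H_p - H_q)
```

Claim (i) follows from the Schur-complement identity $1/\Lambda_{ii} \le \Sigma_{ii}$, and claim (ii) from Hadamard's inequality applied to $\Lambda$; the abstract's point is that these two deficits need not shrink or grow at the same rate.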