In this paper, we introduce a novel family of iterative algorithms which carry out $\alpha$-divergence minimisation in a Variational Inference context. They do so by ensuring a systematic decrease in the $\alpha$-divergence between the variational and the posterior distributions at each step. In its most general form, the variational distribution is a mixture model and our framework allows us to optimise the weights and component parameters of this mixture model simultaneously. Notably, our approach enables us to build on various methods previously proposed for $\alpha$-divergence minimisation, such as Gradient or Power Descent schemes, and we also shed new light on an integrated Expectation Maximization algorithm. Lastly, we provide empirical evidence that our methodology yields improved results on several multimodal target distributions.
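For reference, one common convention for the $\alpha$-divergence between a variational distribution $q$ and the posterior $p$ is sketched below; several normalisations exist in the literature, and the exact definition used in the paper may differ.

% One common convention for the alpha-divergence, for alpha in R \ {0, 1};
% the paper's exact normalisation may differ from this one.
\[
  D_\alpha(q \,\|\, p)
  = \frac{1}{\alpha(\alpha - 1)}
    \left( \int q(\theta)^{\alpha} \, p(\theta)^{1 - \alpha} \, \mathrm{d}\theta - 1 \right),
\]
% Under this convention the forward Kullback--Leibler divergence
% KL(q || p) is recovered in the limit alpha -> 1, and the reverse
% divergence KL(p || q) in the limit alpha -> 0.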