In this paper, we introduce a novel iterative algorithm that carries out $\alpha$-divergence minimisation by ensuring a systematic decrease in the $\alpha$-divergence at each step. In its most general form, our framework allows us to simultaneously optimise the weights and component parameters of a given mixture model. Notably, our approach makes it possible to build on various methods previously proposed for $\alpha$-divergence minimisation, such as gradient or power descent schemes. Furthermore, we shed new light on an integrated Expectation Maximization algorithm. We provide empirical evidence that our methodology yields improved results, while also illustrating the numerical benefits of the flexibility introduced through the parameter $\alpha$ of the $\alpha$-divergence.
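To make the setting concrete, the following is a minimal sketch of $\alpha$-divergence minimisation over the weights of a fixed-component Gaussian mixture via plain gradient descent, one of the scheme families the abstract mentions. All names, densities, and hyperparameters here are illustrative assumptions, not the paper's actual algorithm; it uses the standard form $D_\alpha(p\|q) = \frac{1}{\alpha(\alpha-1)}\left(\int p^\alpha q^{1-\alpha}\,dx - 1\right)$ approximated on a grid.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    # Density of a univariate Gaussian N(mu, sigma^2).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target density p: a two-component Gaussian mixture (illustrative choice).
def p(x):
    return 0.7 * gauss_pdf(x, -2.0, 1.0) + 0.3 * gauss_pdf(x, 2.0, 1.0)

# Variational family q: mixture with fixed components, free weights w.
mus = np.array([-2.0, 0.0, 2.0])
sigmas = np.array([1.0, 1.0, 1.0])

def q(x, w):
    return sum(w[k] * gauss_pdf(x, mus[k], sigmas[k]) for k in range(len(w)))

GRID = np.linspace(-8.0, 8.0, 2001)

def alpha_divergence(w, alpha=0.5):
    # D_alpha(p || q) = (integral of p^alpha q^(1-alpha) - 1) / (alpha(alpha-1)),
    # approximated by a Riemann sum on a fixed grid.
    dx = GRID[1] - GRID[0]
    integral = np.sum(p(GRID) ** alpha * q(GRID, w) ** (1 - alpha)) * dx
    return (integral - 1.0) / (alpha * (alpha - 1.0))

def softmax(z):
    # Parameterise the simplex through unconstrained logits z.
    e = np.exp(z - z.max())
    return e / e.sum()

# Gradient descent on the logits; central finite differences keep the
# sketch dependency-free (an analytic gradient would be used in practice).
z = np.zeros(3)
before = alpha_divergence(softmax(z))
for _ in range(200):
    grad = np.zeros_like(z)
    for k in range(len(z)):
        e = np.zeros_like(z)
        e[k] = 1e-5
        grad[k] = (alpha_divergence(softmax(z + e))
                   - alpha_divergence(softmax(z - e))) / 2e-5
    z -= 0.5 * grad
after = alpha_divergence(softmax(z))
print(after < before)
```

Reweighting the components toward the target's modes reduces the estimated divergence; the paper's contribution, by contrast, is a scheme that guarantees such a decrease at every step, and in its general form also updates the component parameters, not just the weights.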