The Expectation-Maximization (EM) algorithm is one of the most popular methods for parametric distribution-based clustering in unsupervised learning. In this paper, we analyze a generalized EM (GEM) algorithm in the context of Gaussian mixture models, where the maximization step of EM is replaced by a step that merely increases the objective. We show that this GEM algorithm can be understood as a linear time-invariant (LTI) system with a feedback nonlinearity, which allows us to study some of its convergence properties by leveraging tools from robust control theory. Lastly, we explain how the proposed GEM can be designed, and present a pedagogical example illustrating the advantages of the proposed approach.
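As a rough illustration of the GEM idea described above (not the paper's exact formulation), the sketch below runs EM on a 1-D two-component Gaussian mixture, but replaces the full M-step maximization with a partial update: a convex combination of the current parameters and the M-step maximizer, controlled by a hypothetical step size `alpha`. Because the expected complete-data log-likelihood is concave in these parameters, such a partial step still increases it, so the data likelihood is non-decreasing.

```python
import numpy as np

# Illustrative sketch, assuming a simplified setting: 1-D mixture of two
# Gaussians with fixed unit variances; only means and mixing weights are
# updated. "alpha" is a hypothetical step size; alpha = 1 recovers standard EM.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])    # component means (initial guess)
sigma = np.array([1.0, 1.0])  # component std deviations (held fixed here)
pi = np.array([0.5, 0.5])     # mixing weights
alpha = 0.5                   # generalized (partial) M-step size in (0, 1]

def log_likelihood(x, mu, sigma, pi):
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    return np.log(dens.sum(axis=1)).sum()

ll_before = log_likelihood(x, mu, sigma, pi)
for _ in range(20):
    # E-step: posterior responsibility of each component for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # Generalized M-step: move only a fraction alpha toward the maximizer
    mu_star = (gamma * x[:, None]).sum(axis=0) / gamma.sum(axis=0)
    pi_star = gamma.mean(axis=0)
    mu = (1 - alpha) * mu + alpha * mu_star
    pi = (1 - alpha) * pi + alpha * pi_star
ll_after = log_likelihood(x, mu, sigma, pi)
assert ll_after > ll_before  # the GEM iteration did not decrease the likelihood
```

Each GEM iteration here can be read as a state update (the parameters) driven through a nonlinearity (the responsibilities), which is the feedback-system view the abstract refers to.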