The Expectation-Maximization (EM) algorithm is one of the most popular methods for parametric, distribution-based clustering in unsupervised learning. In this paper, we analyze a subclass of generalized EM (GEM) algorithms for Gaussian mixture models, in which the maximization step of EM is replaced by a step that merely increases the surrogate objective. We show that this subclass of GEM algorithms can be understood as a linear time-invariant (LTI) system with a feedback nonlinearity, and we therefore study some of its convergence properties by leveraging tools from robust control theory. Lastly, we explain how the proposed GEM can be designed, and present a pedagogical example illustrating the advantages of the proposed approach.
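To make the GEM idea concrete, the following is a minimal, illustrative sketch of a generalized EM iteration for a one-dimensional Gaussian mixture. Here the exact M-step update is damped by a step size `alpha` in (0, 1], so each iteration only increases the EM surrogate rather than maximizing it (`alpha = 1` recovers standard EM). This damped update is an assumed, generic GEM variant chosen for illustration, not the specific subclass analyzed in the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def gem_gmm_1d(x, mu, sigma2, pi_w, alpha=0.5, n_iter=100):
    """Illustrative GEM for a 1-D Gaussian mixture.

    Instead of the exact M-step, parameters move a fraction `alpha`
    toward the M-step maximizer, which increases (but need not
    maximize) the EM surrogate Q. This is one generic GEM variant,
    not the paper's specific subclass.
    """
    mu = np.asarray(mu, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)
    pi_w = np.asarray(pi_w, dtype=float)
    ll_trace = []
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] ∝ pi_k * N(x_n | mu_k, sigma2_k),
        # computed in log space for numerical stability.
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (np.log(2 * np.pi * sigma2) + d**2 / sigma2) + np.log(pi_w)
        logp_max = logp.max(axis=1, keepdims=True)
        p = np.exp(logp - logp_max)
        r = p / p.sum(axis=1, keepdims=True)
        # Observed-data log-likelihood (log-sum-exp over components).
        ll_trace.append(float((logp_max.ravel() + np.log(p.sum(axis=1))).sum()))
        # Exact M-step targets (what standard EM would set).
        Nk = r.sum(axis=0)
        mu_star = (r * x[:, None]).sum(axis=0) / Nk
        d_new = x[:, None] - mu_star[None, :]
        sigma2_star = (r * d_new**2).sum(axis=0) / Nk
        pi_star = Nk / len(x)
        # Generalized (partial) M-step: move a fraction alpha toward the
        # target. A convex combination keeps the weights on the simplex.
        mu = (1 - alpha) * mu + alpha * mu_star
        sigma2 = (1 - alpha) * sigma2 + alpha * sigma2_star
        pi_w = (1 - alpha) * pi_w + alpha * pi_star
    return mu, sigma2, pi_w, ll_trace
```

A typical usage would draw samples from two well-separated Gaussians and verify that the log-likelihood trace increases over the run, which is the defining property of a GEM scheme.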