Maximum likelihood estimation of mixture proportions has a long history, and continues to play an important role in modern statistics, including in the development of nonparametric empirical Bayes methods. The maximum likelihood estimate has traditionally been computed using the expectation maximization (EM) algorithm, but recent work by Koenker & Mizera shows that modern convex optimization techniques -- in particular, interior point methods -- are substantially faster and more accurate than EM. Here, we develop a new solution based on sequential quadratic programming (SQP). It is substantially faster than the interior point method, and just as accurate. Our approach combines several ideas: first, it solves a reformulation of the original problem; second, it uses an SQP approach to make the best use of the expensive gradient and Hessian computations; third, the SQP iterations are implemented using an active set method to exploit the sparse nature of the quadratic subproblems; fourth, it uses accurate low-rank approximations for more efficient gradient and Hessian computations. We illustrate the benefits of our approach in experiments on synthetic data sets as well as a large genetic association data set. In large data sets (n = 1,000,000 observations, m = 1,000 mixture components), our implementation achieves at least a 100-fold reduction in runtime compared with a state-of-the-art interior point solver. Our methods are implemented in Julia, and in an R package available on CRAN (see https://CRAN.R-project.org/package=mixsqp).
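To make the computational bottleneck concrete, the following is a minimal sketch in Julia (the language of our reference implementation) of the objective, gradient, and Hessian for a standard formulation of the mixture proportions problem. The function name mixobj and the toy dimensions are illustrative assumptions, not the mixsqp implementation; we take L to be the n-by-m matrix of component likelihoods and x a point on the simplex, with objective f(x) = -(1/n) * sum_j log((Lx)_j).

using LinearAlgebra

# Sketch of the objective for maximum likelihood estimation of mixture
# proportions: f(x) = -(1/n) * sum(log.(L*x)), where L is the n-by-m
# matrix of component likelihoods and x lies on the simplex. The
# gradient and Hessian computed here are the expensive steps that the
# SQP iterations and low-rank approximations are designed to reduce.
# This is an illustrative sketch, not the mixsqp implementation.
function mixobj(L::AbstractMatrix, x::AbstractVector)
    n = size(L, 1)
    u = L * x                            # mixture densities, cost O(nm)
    f = -sum(log.(u)) / n                # negative mean log-likelihood
    d = 1 ./ u
    g = -(L' * d) / n                    # gradient, cost O(nm)
    H = (L' * Diagonal(d .^ 2) * L) / n  # Hessian, cost O(n m^2)
    return f, g, H
end

# Toy usage with n = 5 observations and m = 3 mixture components.
L = abs.(randn(5, 3)) .+ 0.1
x = fill(1/3, 3)
f, g, H = mixobj(L, x)

In the full method, a low-rank approximation of L can stand in for L in these matrix products to cut the per-iteration cost, and the quadratic subproblems built from this gradient and Hessian are solved with an active set method, as described above.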