We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the ``search problem'' in mixture models. We would like to solve it with computational and sample complexity lower than that of solving the full original problem, in which one learns the parameters of all components. Our main contributions are a simple but general model for the notion of side information, and a correspondingly simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways in which such side information can be obtained for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.