Mixture-of-experts (MoE) models are a popular framework for modeling heterogeneity in data, for both regression and classification problems in statistics and machine learning, owing to their flexibility and the abundance of statistical estimation and model choice tools. This flexibility comes from allowing the mixture weights (or gating functions) in the MoE model to depend on the explanatory variables, along with the experts (or component densities). This permits the modeling of data arising from more complex data generating processes than the classical finite mixtures and finite mixtures of regression models, whose mixing parameters are independent of the covariates. The use of MoE models in a high-dimensional setting, where the number of explanatory variables can be much larger than the sample size (i.e., $p\gg n$), is challenging from a computational and, in particular, from a theoretical point of view: the literature still lacks results on dealing with the curse of dimensionality, in both statistical estimation and feature selection. We consider the finite mixture-of-experts model with softmax gating functions and Gaussian experts for high-dimensional regression on heterogeneous data, and its $l_1$-regularized estimation via the Lasso. We focus on the estimation properties of the Lasso rather than its feature selection properties. We provide a lower bound on the regularization parameter of the Lasso penalty that ensures an $l_1$-oracle inequality satisfied by the Lasso estimator with respect to the Kullback-Leibler loss.
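For concreteness, a minimal sketch of the model and the penalized criterion under consideration; the notation ($K$ experts, gating weights $w_k$, regression vectors $\beta_k$, variances $\sigma_k^2$) is illustrative and not fixed by the abstract:
\[
f(y \mid x; \psi) \;=\; \sum_{k=1}^{K} \frac{\exp\!\left(w_k^\top x\right)}{\sum_{l=1}^{K} \exp\!\left(w_l^\top x\right)}\, \phi\!\left(y;\, \beta_k^\top x,\, \sigma_k^2\right),
\qquad
\hat{\psi}^{\mathrm{Lasso}} \;\in\; \operatorname*{arg\,min}_{\psi}\, \left\{ -\frac{1}{n}\sum_{i=1}^{n} \log f\!\left(y_i \mid x_i; \psi\right) \;+\; \lambda \,\|\psi\|_1 \right\},
\]
where $\phi(\cdot;\mu,\sigma^2)$ denotes the Gaussian density and $\|\psi\|_1$ sums the absolute values of the gating and regression coefficients. The lower bound discussed above concerns the regularization parameter $\lambda$ in this criterion.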