We investigate the estimation properties of the mixture of experts (MoE) model in a high-dimensional setting, where the number of predictors is much larger than the sample size, a regime for which theoretical results are particularly scarce in the literature. We consider the class of softmax-gated Gaussian MoE (SGMoE) models, defined as MoE models with softmax gating functions and Gaussian experts, and focus on the theoretical properties of their $l_1$-regularized estimation via the Lasso. To the best of our knowledge, we are the first to investigate the $l_1$-regularization properties of SGMoE models from a non-asymptotic perspective under minimal assumptions, namely the boundedness of the parameter space. We provide a lower bound on the regularization parameter of the Lasso penalty that ensures non-asymptotic theoretical control of the Kullback--Leibler loss of the Lasso estimator for SGMoE models. Finally, we carry out a simulation study to empirically validate our theoretical findings.
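For concreteness, a minimal sketch of the conditional density and penalized criterion that such an SGMoE analysis typically involves is given below; the notation ($K$ experts, gating weights $\omega_{k0}, \omega_k$, expert regression parameters $\beta_{k0}, \beta_k$, variances $\sigma_k^2$, regularization parameter $\lambda$) is illustrative and not necessarily that of the paper.
\[
  s_{\psi}(y \mid x) \;=\; \sum_{k=1}^{K}
  \frac{\exp\!\big(\omega_{k0} + \omega_k^{\top} x\big)}
       {\sum_{l=1}^{K} \exp\!\big(\omega_{l0} + \omega_l^{\top} x\big)}\,
  \phi\!\big(y;\, \beta_{k0} + \beta_k^{\top} x,\, \sigma_k^{2}\big),
\]
where $\phi(\cdot\,;\mu,\sigma^2)$ denotes the Gaussian density, and the Lasso estimator maximizes the $l_1$-penalized log-likelihood
\[
  \widehat{\psi} \;\in\; \operatorname*{arg\,max}_{\psi}\;
  \frac{1}{n}\sum_{i=1}^{n} \log s_{\psi}(y_i \mid x_i)
  \;-\; \lambda \sum_{k=1}^{K}\big(\lVert \omega_k \rVert_1 + \lVert \beta_k \rVert_1\big),
\]
with the lower bound on $\lambda$ being the quantity for which the paper's non-asymptotic guarantee is stated.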