We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets ("tasks"). A key novelty of our framework is that it allows both the sparsity pattern of the regression coefficients and the values of the nonzero coefficients to differ across tasks while still leveraging partially shared structure. Our methods encourage models to share information across tasks by separately encouraging (1) coefficient supports and/or (2) nonzero coefficient values to be similar. This allows models to borrow strength during variable selection even when nonzero coefficient values differ across tasks. We propose a novel mixed-integer programming formulation for our estimator and develop custom scalable algorithms, based on block coordinate descent and combinatorial local search, to obtain high-quality (approximate) solutions. Additionally, we propose a novel exact optimization algorithm to obtain globally optimal solutions. We investigate the theoretical properties of our estimators and formally show how they leverage shared support information across tasks to achieve better variable selection performance. We evaluate the performance of our methods in simulations and two biomedical applications. Our proposed approaches appear to outperform other sparse MTL methods in variable selection and prediction accuracy. We provide the sMTL package on CRAN.
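To make the structure of the estimator concrete, the display below sketches one plausible form of the kind of objective described above, with $K$ tasks, task-specific coefficients $\beta_k$, and binary support indicators $z_{j,k}$; the specific penalties, symbols ($\bar{z}_j$, $\bar{\beta}_j$, $\lambda_1$, $\lambda_2$, $s$), and their combination are illustrative assumptions rather than the paper's exact formulation:
\[
\min_{\{\beta_k\},\,\{z_k\}} \;\; \sum_{k=1}^{K} \frac{1}{2 n_k}\,\lVert y_k - X_k \beta_k \rVert_2^2
\;+\; \lambda_1 \sum_{j=1}^{p} \sum_{k=1}^{K} \bigl(z_{j,k} - \bar{z}_j\bigr)^2
\;+\; \lambda_2 \sum_{j=1}^{p} \sum_{k=1}^{K} \bigl(\beta_{j,k} - \bar{\beta}_j\bigr)^2
\]
subject to $z_{j,k} \in \{0,1\}$, $\beta_{j,k}\,(1 - z_{j,k}) = 0$, and $\sum_{j=1}^{p} z_{j,k} \le s$ for each task $k$, where $\bar{z}_j$ and $\bar{\beta}_j$ denote cross-task averages. Under this reading, the first penalty encourages supports to agree across tasks, the second encourages nonzero coefficient values to be similar, and the binary indicators with the cardinality constraint give the mixed-integer programming structure referenced above.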