Correlated outcomes are common in many practical problems. In some settings, one outcome is of particular interest, and the others are auxiliary. To leverage the information shared by all the outcomes, traditional multi-task learning (MTL) minimizes an averaged loss function over all the outcomes, which may lead to biased estimation for the target outcome, especially when the MTL model is misspecified. In this work, based on a decomposition of the estimation bias into two types, within-subspace and against-subspace, we develop a robust transfer learning approach to estimating a high-dimensional linear decision rule for the outcome of interest in the presence of auxiliary outcomes. The proposed method includes an MTL step using all outcomes to gain efficiency, and a subsequent calibration step using only the outcome of interest to correct both types of bias. We show that the final estimator can achieve a lower estimation error than the one using only the single outcome of interest. Simulations and a real data analysis demonstrate the superiority of the proposed method.
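The abstract does not give implementation details, but the two-step structure it describes (an MTL step over all outcomes, then a calibration step on the target outcome alone) can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy version, not the paper's actual estimator: it uses scikit-learn's `MultiTaskLasso` for the MTL step and an ordinary `Lasso` fit on the target outcome's residuals as a stand-in for the calibration step; the data-generating model, penalty levels, and variable names are all illustrative.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Lasso

rng = np.random.default_rng(0)
n, p, K = 200, 50, 3  # samples, features, outcomes (target is column 0)
X = rng.standard_normal((n, p))
B = np.zeros((p, K))
B[:5, :] = 1.0        # signal shared across all outcomes
B[5, 0] = 2.0         # target-specific signal (a source of MTL bias)
Y = X @ B + 0.5 * rng.standard_normal((n, K))

# Step 1 (MTL step): borrow strength across all K outcomes.
mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
beta_mtl = mtl.coef_[0]  # coefficient estimate for the target outcome

# Step 2 (calibration step, schematic): refit only the target outcome's
# residuals to correct the bias introduced by averaging over tasks.
resid = Y[:, 0] - X @ beta_mtl
delta = Lasso(alpha=0.05).fit(X, resid).coef_
beta_final = beta_mtl + delta
```

In this toy setup the calibration fit can recover target-specific structure that the averaged MTL loss shrinks away, which mirrors the intuition in the abstract; the paper's actual bias decomposition and correction are more refined.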