The article proposes and theoretically analyses a \emph{computationally efficient} multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes \cite{barshan2011supervised,bair2006prediction}. The analysis reveals that (i) by default, learning may dramatically fail by suffering from \emph{negative transfer}, but that (ii) simple counter-measures on the data labels avert negative transfer and necessarily result in improved performance. Supporting experiments on synthetic and real data benchmarks show that the proposed method achieves performance comparable to state-of-the-art MTL methods, but at a \emph{significantly reduced computational cost}.