Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them. In this paper, we give a survey of MTL from the perspectives of algorithmic modeling, applications, and theoretical analyses. For algorithmic modeling, we first give a definition of MTL and then classify MTL algorithms into five categories: the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, discussing the characteristics of each. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning, and graphical models. When the number of tasks is large or the data dimensionality is high, we review online, parallel, and distributed MTL models, as well as dimensionality reduction and feature hashing, to reveal their computational and storage advantages. Many real-world applications use MTL to boost performance, and we review representative works in this paper. Finally, we present theoretical analyses and discuss several future directions for MTL.
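To make the core idea concrete, the following is a minimal sketch (not taken from the survey itself) of one classic regularization-based MTL model, mean-regularized multi-task learning: each task's linear-regression weights are pulled toward the average weights across all tasks, so related tasks share statistical strength. The toy data, the coupling strength `lam`, and the alternating-minimization loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: T related linear-regression tasks whose true
# weight vectors are small perturbations of one shared vector.
T, d, n = 3, 5, 20
w_shared = rng.normal(size=d)
tasks = []
for t in range(T):
    w_true = w_shared + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    tasks.append((X, y, w_true))

lam = 1.0  # strength of the pull toward the shared mean (assumed value)

def fit_mtl(tasks, lam, iters=50):
    """Mean-regularized MTL via alternating minimization: solve each
    task's ridge problem ||X w - y||^2 + lam ||w - w_bar||^2 while
    updating the shared mean w_bar of all task weights."""
    W = np.zeros((len(tasks), d))
    for _ in range(iters):
        w_bar = W.mean(axis=0)
        for t, (X, y, _) in enumerate(tasks):
            # closed-form ridge solution with the mean as the prior center
            A = X.T @ X + lam * np.eye(d)
            b = X.T @ y + lam * w_bar
            W[t] = np.linalg.solve(A, b)
    return W

W = fit_mtl(tasks, lam)
err = np.mean([np.linalg.norm(W[t] - tasks[t][2]) for t in range(T)])
print(W.shape, round(err, 3))
```

This model is one instance of the task relation learning family discussed in the survey; the other four families (feature learning, low-rank, task clustering, decomposition) couple tasks through different shared structures.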