Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we present a survey of MTL. First, we classify different MTL algorithms into several categories: the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach, and deep learning approach. To compare these approaches, we discuss the characteristics of each one. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, reinforcement learning, multi-view learning, and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models have difficulty handling such settings, so we review online, parallel, and distributed MTL models, as well as feature hashing, to reveal their computational and storage advantages. Many real-world applications use MTL to boost performance, and we introduce some representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.