Multi-task learning aims to acquire a set of functions, either regressors or classifiers, that perform well on diverse tasks. At its core, the idea behind multi-task learning is to exploit the intrinsic similarity across data sources to aid the learning process in each individual domain. In this paper we draw intuition from the two extreme learning scenarios, a single function shared by all tasks and task-specific functions that ignore the other tasks entirely, to propose a bias-variance trade-off. To control the relationship between the variance (governed by the number of i.i.d. samples available per task) and the bias (introduced by using data from the other tasks), we introduce a constrained learning formulation that enforces the domain-specific solutions to remain close to a central function. This problem is solved in the dual domain, for which we propose a stochastic primal-dual algorithm. Experimental results on a multi-domain classification problem with real data show that the proposed procedure outperforms both the task-specific classifiers and the single shared classifier.
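To make the trade-off concrete, the following is a minimal sketch of one way such a constrained formulation can be written; the loss $\ell$, task parameters $w_i$, central parameters $w_0$, and proximity budget $\epsilon$ are illustrative assumptions, not necessarily the paper's exact notation:

$$
\min_{w_0,\,\{w_i\}_{i=1}^{T}} \;\; \sum_{i=1}^{T} \mathbb{E}_{(x,y)\sim\mathcal{D}_i}\!\left[\ell\!\left(f_{w_i}(x),\,y\right)\right]
\quad \text{s.t.} \quad \|w_i - w_0\|^2 \le \epsilon, \quad i=1,\dots,T.
$$

Under this reading, $\epsilon = 0$ recovers the single shared function (low variance, since all tasks' samples fit one function, but high bias for any individual task), while $\epsilon \to \infty$ recovers independent task-specific functions (no bias from the other tasks, but high variance from each task's limited samples).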
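As a rough illustration of how a stochastic primal-dual method can handle a formulation of this shape, the sketch below alternates stochastic gradient steps on the primal variables with projected dual ascent on the constraint multipliers. All specifics (linear models, squared loss, step sizes, the budget eps) are assumptions made for the example, not the paper's algorithm.

```python
import numpy as np

# Generic stochastic primal-dual sketch for:
#   min_{w0, {w_i}} sum_i E[loss_i(w_i)]  s.t.  ||w_i - w0||^2 <= eps
# via the Lagrangian  sum_i loss_i(w_i) + lam_i * (||w_i - w0||^2 - eps).

rng = np.random.default_rng(0)
T, d, n = 3, 5, 200             # tasks, feature dimension, samples per task

# Synthetic related tasks: small perturbations of a common linear model.
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(T)]
y = [X[i] @ (w_true + 0.1 * rng.normal(size=d)) for i in range(T)]

eps = 0.5                       # proximity budget (assumed)
eta_p, eta_d = 1e-2, 1e-2       # primal / dual step sizes (assumed)

w0 = np.zeros(d)                # central function
W = np.zeros((T, d))            # task-specific functions
lam = np.zeros(T)               # one dual variable per proximity constraint

for it in range(2000):
    for i in range(T):
        # Stochastic primal step: gradient on one sample from task i,
        # plus the pull toward the central function weighted by lam[i].
        k = rng.integers(n)
        resid = X[i][k] @ W[i] - y[i][k]
        grad_i = resid * X[i][k] + 2.0 * lam[i] * (W[i] - w0)
        W[i] -= eta_p * grad_i
    # Primal step on the central function (enters only the constraints).
    w0 -= eta_p * sum(2.0 * lam[i] * (w0 - W[i]) for i in range(T))
    # Dual ascent on the constraint slack, projected onto lam >= 0.
    for i in range(T):
        lam[i] = max(0.0, lam[i] + eta_d * (np.sum((W[i] - w0) ** 2) - eps))

print("constraint slacks:",
      [float(np.sum((W[i] - w0) ** 2) - eps) for i in range(T)])
```

In this toy run the dual variables grow only while a task drifts farther than eps from the central function, so the multipliers adaptively set how strongly each task borrows from the others.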