Multi-task learning aims to boost the generalization performance of multiple related tasks simultaneously by leveraging the information contained in those tasks. In this paper, we propose a multi-task learning framework that exploits prior knowledge about the relations between features. We also impose a penalty on how the coefficients of each feature vary across tasks, so that related tasks have similar coefficients on the features they share. In addition, we capture a common set of features via group sparsity. The objective is formulated as a non-smooth convex optimization problem, which can be solved by various methods, including gradient descent with a fixed step size, the iterative shrinkage-thresholding algorithm (ISTA) with backtracking, and its accelerated variant, the fast iterative shrinkage-thresholding algorithm (FISTA). Since these methods converge only sub-linearly, we further propose an algorithm with an asymptotically linear convergence rate and a theoretical guarantee. Empirical experiments on both regression and classification tasks with real-world datasets demonstrate that the proposed algorithms improve the generalization performance of multiple related tasks.
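To make the optimization machinery concrete, the following is a minimal sketch of FISTA applied to a generic sparse problem (a lasso objective). This is an illustration only: the paper's actual objective additionally involves group sparsity and feature-relation penalties, so the soft-thresholding proximal step here stands in for the more structured proximal operator the framework would require. All names (`soft_threshold`, `fista`) are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: element-wise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.

    Uses a fixed step size 1/L, where L = ||A||_2^2 is the Lipschitz
    constant of the gradient of the smooth term (backtracking would
    estimate L adaptively instead).
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y = x.copy()          # extrapolated point
    t = 1.0               # momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                  # gradient of smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

Dropping the momentum step (setting `y = x_new`) recovers plain ISTA, whose objective error decays as O(1/k) versus O(1/k^2) for FISTA; both are sub-linear, which motivates the linearly convergent algorithm the abstract announces.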