Multi-task learning is a framework in which different learning tasks share knowledge to improve their generalization performance. It is an active research area centered on several core questions; in particular, which tasks are correlated or similar, and how knowledge should be shared among correlated tasks. Existing works usually do not distinguish the polarity (sign) of feature weights from their magnitude and commonly rely on linear correlation, owing to three major technical challenges: 1) optimizing models that regularize feature weight polarity, 2) deciding whether to regularize signs or magnitudes, and 3) identifying which tasks should share their sign and/or magnitude patterns. To address these challenges, this paper proposes a new multi-task learning framework that regularizes feature weight signs across tasks. We formulate the problem as a biconvex inequality-constrained optimization with slack variables and propose a new efficient algorithm with theoretical guarantees on generalization performance and convergence. Extensive experiments on multiple datasets demonstrate the proposed method's effectiveness and efficiency, and the reasonableness of the regularized feature weight patterns.
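To make the abstract's formulation concrete, the following is a minimal sketch of what a sign-regularized multi-task objective with slacks could look like; the loss \(\mathcal{L}_t\), the correlated-pair set \(\mathcal{C}\), and the weights \(\lambda\), \(\mu\), \(\xi_{stj}\) are illustrative assumptions, not the paper's actual notation:

\[
\begin{aligned}
\min_{\{\mathbf{w}_t\},\, \boldsymbol{\xi} \ge 0} \quad & \sum_{t=1}^{T} \mathcal{L}_t(\mathbf{w}_t) \;+\; \lambda \sum_{t=1}^{T} \|\mathbf{w}_t\|_2^2 \;+\; \mu \sum_{(s,t) \in \mathcal{C}} \sum_{j=1}^{d} \xi_{stj} \\
\text{s.t.} \quad & w_{sj}\, w_{tj} \;\ge\; -\,\xi_{stj}, \qquad \forall (s,t) \in \mathcal{C},\; j = 1, \dots, d,
\end{aligned}
\]

where \(\mathcal{L}_t\) is task \(t\)'s empirical loss, \(\mathcal{C}\) is the set of task pairs chosen to share sign patterns, and the bilinear constraint \(w_{sj} w_{tj} \ge -\xi_{stj}\) pays a slack penalty only when the two weights take opposite signs. Fixing every weight vector except one leaves a convex problem in the remaining vector, which illustrates the biconvex structure the abstract refers to.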