In recent years, thanks to the rapid development of deep learning (DL), DL-based multi-task learning (MTL) has made significant progress and has been successfully applied to recommendation systems (RS). However, in a recommender system, the correlations among the involved tasks are complex. As a result, existing MTL models designed for RS suffer from negative transfer to varying degrees, which harms optimization. We find that the root cause of negative transfer is feature redundancy: features learned for different tasks interfere with one another. To alleviate negative transfer, we propose a novel multi-task learning method termed Feature Decomposition Network (FDN). The key idea of FDN is to reduce feature redundancy by explicitly decomposing features into task-specific and task-shared components under carefully designed constraints. We demonstrate the effectiveness of the proposed method on two datasets: a synthetic dataset and a public dataset (Ali-CCP). Experimental results show that FDN outperforms state-of-the-art (SOTA) methods by a noticeable margin.
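The decomposition idea can be illustrated with a minimal sketch. This is a hypothetical toy construction, not the authors' exact FDN architecture: each input is encoded by a task-shared encoder and per-task encoders, and an overlap penalty between the shared and task-specific features (one possible instantiation of the "carefully designed constraints") discourages the two feature sets from encoding redundant information.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder with a ReLU nonlinearity."""
    return np.maximum(x @ W, 0.0)

# Hypothetical dimensions, chosen only for illustration.
d_in, d_feat, n_tasks = 16, 8, 2
W_shared = rng.normal(size=(d_in, d_feat))               # task-shared encoder
W_spec = [rng.normal(size=(d_in, d_feat)) for _ in range(n_tasks)]

x = rng.normal(size=(4, d_in))                           # mini-batch of 4 samples
f_shared = encode(x, W_shared)                           # task-shared features
f_spec = [encode(x, W) for W in W_spec]                  # task-specific features

# Redundancy penalty: mean squared inner product between shared and
# task-specific features. Minimizing this during training pushes the
# two feature sets toward carrying non-overlapping information.
redundancy = sum(
    np.mean((fs * f_shared).sum(axis=1) ** 2) for fs in f_spec
) / n_tasks

# Each task head would consume the concatenation of the shared
# features and its own task-specific features.
task_inputs = [np.concatenate([f_shared, fs], axis=1) for fs in f_spec]
```

Here each of the two task heads receives a `(4, 16)` input (8 shared plus 8 specific dimensions); in a real model the penalty term would be added to the multi-task training loss.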