The ubiquity of implicit feedback makes it the default choice for building modern recommender systems. Generally speaking, observed interactions are treated as positive samples, while unobserved interactions are treated as negative ones. However, implicit feedback is inherently noisy because of the ubiquitous presence of noisy-positive and noisy-negative interactions. Recently, some studies have noticed the importance of denoising implicit feedback for recommendation, and have enhanced the robustness of recommendation models to some extent. Nonetheless, they typically fail to (1) capture hard yet clean interactions for learning comprehensive user preferences, and (2) provide a universal denoising solution that can be applied to various kinds of recommendation models. In this paper, we thoroughly investigate the memorization effect of recommendation models and propose a new denoising paradigm, Self-Guided Denoising Learning (SGDL), which collects memorized interactions at the early stage of training (the "noise-resistant" period) and leverages those data as denoising signals to guide the subsequent training (the "noise-sensitive" period) of the model in a meta-learning manner. Moreover, our method can automatically switch its learning phase at the memorization point from memorization to self-guided learning, and it selects clean and informative memorized data via a novel adaptive denoising scheduler to improve robustness. We incorporate SGDL into four representative recommendation models (NeuMF, CDAE, NGCF, and LightGCN) with different loss functions (binary cross-entropy and BPR loss). Experimental results on three benchmark datasets demonstrate the effectiveness of SGDL over state-of-the-art denoising methods such as T-CE, IR, and DeCA, and even over state-of-the-art robust graph-based methods such as SGCN and SGL.
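To make the memorization idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of collecting "memorized" interactions during the noise-resistant period: interactions the model consistently fits in the last few early epochs are kept as likely-clean denoising signals, while unstable ones are held out. The function name, history format, and window size `k` are illustrative assumptions.

```python
# Hypothetical sketch of memorization-based clean-sample collection;
# names, data shapes, and the threshold k are assumptions, not SGDL's code.

def collect_memorized(history, k=3):
    """Return interactions the model fit correctly in each of the last k
    epochs of the noise-resistant period (treated as memorized, likely clean)."""
    memorized = set()
    for interaction, correct_flags in history.items():
        if len(correct_flags) >= k and all(correct_flags[-k:]):
            memorized.add(interaction)
    return memorized

# Toy per-interaction fit history over four early epochs
# (True = the model scored this observed interaction above a threshold).
history = {
    ("u1", "i3"): [True, True, True, True],     # consistently fit -> memorized
    ("u2", "i7"): [False, False, True, False],  # unstable -> possibly noisy
    ("u3", "i1"): [False, True, True, True],    # fit in recent epochs -> memorized
}

clean = collect_memorized(history, k=3)
```

In the full paradigm, these memorized interactions would then guide the noise-sensitive period, e.g. by weighting the loss of the remaining samples in a meta-learning step rather than simply discarding them.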