Learning from implicit feedback is one of the most common settings in the application of recommender systems. Generally speaking, interacted examples are treated as positive, while negative examples are sampled from uninteracted ones. However, noisy examples are prevalent in real-world implicit feedback. A noisy positive example is one the user interacted with but that actually reflects negative user preference; a noisy negative example may be uninteracted only because the user was unaware of the item, and can thus denote potential positive preference. Conventional training methods overlook these noisy examples, leading to sub-optimal recommendations. In this work, we propose a novel framework to learn robust recommenders from implicit feedback. Through an empirical study, we find that different models make relatively similar predictions on clean examples, which reflect real user preference, while their predictions on noisy examples vary much more across models. Motivated by this observation, we propose denoising with cross-model agreement (DeCA), which aims to minimize the KL-divergence between the real user preference distributions parameterized by two recommendation models while maximizing the likelihood of data observation. We apply the proposed DeCA to four state-of-the-art recommendation models and conduct experiments on four datasets. Experimental results demonstrate that DeCA significantly improves recommendation performance compared with normal training and other denoising methods. Code will be open-sourced.
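The objective sketched above combines an observation-likelihood term with a cross-model agreement term. The following is a minimal illustrative sketch of that idea, not the paper's exact formulation: predictions are treated as Bernoulli preference probabilities, the likelihood term is a standard binary cross-entropy on the observed implicit feedback, and agreement is a symmetrized KL between the two models' distributions. The function names and the weight `alpha` are hypothetical.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-8):
    """Elementwise KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def deca_style_loss(pred_a, pred_b, observed, alpha=0.5):
    """Illustrative DeCA-style objective (simplified sketch).

    pred_a, pred_b: predicted preference probabilities from two models.
    observed: 1 for interacted (positive) examples, 0 for sampled negatives.
    alpha: hypothetical weight on the agreement term.
    """
    eps = 1e-8
    pa = np.clip(np.asarray(pred_a, dtype=float), eps, 1 - eps)
    pb = np.clip(np.asarray(pred_b, dtype=float), eps, 1 - eps)
    y = np.asarray(observed, dtype=float)
    # Negative log-likelihood of the observed feedback under both models.
    nll = -(y * np.log(pa) + (1 - y) * np.log(1 - pa)).mean()
    nll += -(y * np.log(pb) + (1 - y) * np.log(1 - pb)).mean()
    # Symmetrized KL encourages the two preference distributions to agree,
    # which (per the abstract's observation) they already do on clean examples.
    agree = 0.5 * (bernoulli_kl(pa, pb) + bernoulli_kl(pb, pa)).mean()
    return nll + alpha * agree
```

In this sketch, examples on which the two models disagree contribute a large KL term, so the agreement penalty effectively down-weights confident fitting of likely-noisy examples.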