Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. The wide deployment of recommender systems makes it necessary to study defenses against such attacks. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to achieve both generalization and robustness. Considering these limitations, we propose integrating data processing with robust modeling in a general framework, Triple Cooperative Defense (TCD), which improves model robustness through the co-training of three models. Specifically, in each round of training, we sequentially use the high-confidence predicted ratings (consistent ratings) of any two models as auxiliary training data for the remaining model, so that the three models cooperatively improve recommendation robustness. Notably, TCD adds pseudo-labeled data instead of deleting abnormal data, which avoids discarding normal samples, and the cooperative training of the three models also benefits model generalization. Extensive experiments with five poisoning attacks on three real-world datasets show that the robustness improvement of TCD significantly outperforms the baselines. It is worth mentioning that TCD also improves model generalization.
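The core pseudo-labeling step described above (taking the consistent, high-confidence predictions of two models as auxiliary training data for the third) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agreement tolerance `tol`, the function names, and the use of dense predicted-rating matrices are all assumptions made for clarity.

```python
# Hypothetical sketch of one TCD-style co-training round.
# The consistency criterion (absolute agreement within `tol`) is an
# illustrative assumption, not the paper's exact rule.
import numpy as np

def consistent_pseudo_labels(pred_a, pred_b, unrated_mask, tol=0.1):
    """Select (user, item) pairs where two models' predicted ratings
    agree within `tol`, and return the averaged rating as a pseudo-label
    to augment the third model's training data."""
    agree = np.abs(pred_a - pred_b) <= tol
    selected = agree & unrated_mask          # only augment unrated entries
    users, items = np.nonzero(selected)
    labels = (pred_a[selected] + pred_b[selected]) / 2.0
    return users, items, labels

# Toy example: predicted rating matrices from two of the three models.
rng = np.random.default_rng(0)
pred_1 = rng.uniform(1.0, 5.0, size=(4, 5))
pred_2 = pred_1 + rng.normal(0.0, 0.2, size=(4, 5))  # mostly consistent
unrated = np.ones((4, 5), dtype=bool)                # all entries unrated here

u, i, y = consistent_pseudo_labels(pred_1, pred_2, unrated, tol=0.1)
# The (u[k], i[k], y[k]) triples would be appended to the third model's
# training set before its next update; the model roles rotate each round.
```

In a full training loop, each of the three models would in turn play the role of the "remaining" model, receiving pseudo-labels only where the other two agree, which filters out low-confidence (and potentially attack-influenced) predictions.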