Despite the tremendous development of recommender systems driven by the recent progress of machine learning, current recommender systems remain vulnerable to distribution shifts of users and items in realistic scenarios, leading to a sharp decline of performance in testing environments. The problem is even more severe in many common applications where only implicit feedback from sparse data is available. Hence, it is crucial to improve the performance stability of recommendation methods across different environments. In this work, we first make a thorough analysis of the implicit recommendation problem from the viewpoint of out-of-distribution (OOD) generalization. Then, under the guidance of our theoretical analysis, we propose to incorporate a recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref, which mainly consists of causal learning of invariant user preferences and anti-preference negative sampling to deal with implicit feedback. Extensive experimental results on real-world datasets clearly demonstrate that our approach significantly outperforms the benchmark models under various out-of-distribution settings and exhibits impressive interpretability.