Graph Neural Networks (GNNs) provide powerful representations for recommendation tasks. GNN-based recommendation systems capture the complex high-order connectivity between users and items by aggregating information from distant neighbors, and can thereby improve the performance of recommender systems. Recently, Knowledge Graphs (KGs) have also been incorporated into the user-item interaction graph to provide more abundant contextual information; they are exploited to address cold-start problems and to enable more explainable aggregation in GNN-based recommender systems (GNN-Rs). However, due to the heterogeneous nature of users and items, developing an effective aggregation strategy that works across multiple GNN-Rs, such as LightGCN and KGAT, remains a challenge. In this paper, we propose a novel reinforcement learning-based message passing framework for recommender systems, which we call DPAO (Dual Policy framework for Aggregation Optimization). This framework adaptively determines the high-order connectivity used to aggregate information for users and items via dual policy learning. Dual policy learning leverages two Deep-Q-Network models to exploit user- and item-aware feedback from a GNN-R and boost the performance of the target GNN-R. We evaluated our proposed framework with both non-KG-based and KG-based GNN-R models on six real-world datasets, and the results show that it significantly enhances recent base models, improving nDCG and Recall by up to 63.7% and 42.9%, respectively. Our implementation code is available at https://github.com/steve30572/DPAO/.
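To make the dual-policy idea concrete, the sketch below uses two tabular Q-learning policies standing in for the paper's two Deep-Q-Networks: one policy selects an aggregation depth for users and the other for items, each trained from (noisy) scalar feedback. The reward values and the single-state bandit formulation are hypothetical illustrations, not the actual GNN-R feedback signal used by DPAO.

```python
import numpy as np

# Minimal sketch of DPAO's dual-policy aggregation idea. Tabular
# Q-learning stands in for the paper's Deep-Q-Networks; the per-hop
# rewards below are hypothetical, not the paper's GNN-R feedback.

ACTIONS = [1, 2, 3]  # candidate aggregation depths (number of hops)

def train_policy(reward_per_hop, episodes=500, lr=0.1, eps=0.2, seed=0):
    """Learn one Q-table that picks an aggregation depth.

    One such policy is trained per side (users, items); the reward
    models scalar feedback from the recommender for each depth choice.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(len(ACTIONS))
    for _ in range(episodes):
        # epsilon-greedy action selection over candidate depths
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(q))
        # noisy scalar feedback for the chosen depth (illustrative)
        r = reward_per_hop[a] + rng.normal(0.0, 0.01)
        q[a] += lr * (r - q[a])  # incremental Q-value update
    return q

# Hypothetical feedback: users benefit most from 2-hop aggregation,
# items from 1-hop, reflecting their heterogeneous neighborhoods.
user_q = train_policy(reward_per_hop=[0.3, 0.8, 0.5], seed=0)
item_q = train_policy(reward_per_hop=[0.9, 0.4, 0.2], seed=1)

user_hops = ACTIONS[int(np.argmax(user_q))]
item_hops = ACTIONS[int(np.argmax(item_q))]
print(user_hops, item_hops)
```

The point of the dual design is visible here: because the two policies are trained on separate feedback, users and items can settle on different aggregation depths rather than sharing one fixed number of GNN layers.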