Recent advances in recommender systems have demonstrated the potential of Reinforcement Learning (RL) to handle the dynamically evolving interactions between users and recommender systems. However, training an optimal RL agent is generally impractical given the sparse user feedback data commonly found in recommendation settings. To address this lack of interaction data in current RL-based recommender systems, we propose to learn a general Model-Agnostic Counterfactual Synthesis (MACS) policy for counterfactual user interaction data augmentation. The counterfactual synthesis policy aims to synthesise counterfactual states while preserving the information in the original state that is relevant to the user's interests, building on two training approaches we design: learning with expert demonstrations and joint training. As a result, each counterfactual data point is synthesised from the recommendation agent's current interaction with the environment, adapting to users' dynamic interests. We integrate the proposed policy with Deep Deterministic Policy Gradient (DDPG), Soft Actor-Critic (SAC) and Twin Delayed DDPG (TD3) in an adaptive pipeline, in which the recommendation agent can generate counterfactual data to improve recommendation performance. Empirical results on both online simulation and offline datasets demonstrate the effectiveness and generalisation of our counterfactual synthesis policy and verify that it improves the performance of RL recommendation agents.
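To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how a counterfactual synthesis policy could augment an RL recommendation agent's replay buffer. It assumes a gym-style environment and hypothetical agent/buffer interfaces; the learned MACS policy (trained via expert demonstrations or joint training) is stubbed out as a simple state perturbation.

```python
import numpy as np

class CounterfactualSynthesisPolicy:
    """Maps an observed state to a counterfactual state while keeping the
    user-interest-relevant information (sketch of the MACS idea only)."""

    def __init__(self, state_dim, noise_scale=0.1):
        self.state_dim = state_dim
        self.noise_scale = noise_scale

    def synthesise(self, state):
        # Placeholder: in the paper this is a learned policy; here we merely
        # perturb the state to illustrate where synthesis happens.
        return state + self.noise_scale * np.random.randn(self.state_dim)


def collect_with_augmentation(agent, env, cf_policy, buffer, steps=1000):
    """Collect real transitions and add a counterfactual copy of each one.

    `agent`, `env`, and `buffer` are hypothetical objects standing in for an
    off-policy actor (e.g. DDPG / SAC / TD3), the recommendation environment,
    and an experience replay buffer.
    """
    state = env.reset()
    for _ in range(steps):
        action = agent.act(state)
        next_state, reward, done, _ = env.step(action)
        buffer.add(state, action, reward, next_state, done)

        # Counterfactual augmentation: synthesise an alternative state based on
        # the agent's current interaction and store it as an extra transition.
        # Reusing (action, reward, next_state) is a simplification for brevity.
        cf_state = cf_policy.synthesise(state)
        buffer.add(cf_state, action, reward, next_state, done)

        state = env.reset() if done else next_state
```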