Recommender systems play a crucial role in mitigating information overload by suggesting personalized items or services to users. The vast majority of traditional recommender systems treat recommendation as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system capable of continuously improving its strategies during its interactions with users. We model the sequential interactions between users and the recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies by recommending trial-and-error items and receiving reinforcement for these items from users' feedback. In particular, we introduce an online user-agent interacting environment simulator, which can pre-train and evaluate model parameters offline before the model is applied online. Moreover, we validate the importance of list-wise recommendations during the interactions between users and the agent, and develop a novel approach to incorporate them into the proposed framework, LIRD, for list-wise recommendations. Experimental results on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.
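The MDP framing above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual LIRD implementation: the simulator, the click-probability table, and the greedy trial-and-error policy are all stand-ins for the learned actor-critic agent, chosen only to show how a state (recent user interactions), a list-wise action, and a simulated reward fit together.

```python
import random


class OfflineSimulator:
    """Toy user-agent environment simulator (hypothetical sketch).

    State: the user's most recently clicked items.
    Action: a list of K recommended items (list-wise recommendation).
    Feedback: per-item simulated clicks drawn from logged click rates,
    so a policy can be pre-trained and evaluated offline.
    """

    def __init__(self, logged_click_probs, history_len=3, seed=0):
        self.probs = logged_click_probs  # item -> simulated click probability
        self.history_len = history_len
        self.rng = random.Random(seed)
        self.state = []

    def reset(self):
        self.state = []
        return tuple(self.state)

    def step(self, action_list):
        # Simulate per-item feedback for the whole recommended list.
        clicks = [self.rng.random() < self.probs.get(i, 0.0) for i in action_list]
        for item, clicked in zip(action_list, clicks):
            if clicked:
                self.state.append(item)  # clicked items extend the user state
        self.state = self.state[-self.history_len :]
        reward = sum(clicks)  # list-wise reward: total clicks on the list
        return tuple(self.state), reward, clicks


def trial_and_error_policy(env, items, k=2, episodes=50):
    """Minimal stand-in for the RL agent: estimate each item's click rate
    from accumulated feedback and greedily recommend the top-k list.
    Unseen items get an optimistic estimate of 1.0 so each gets tried."""
    clicks = {i: 0 for i in items}
    shows = {i: 0 for i in items}
    for _ in range(episodes):
        env.reset()
        ranked = sorted(
            items,
            key=lambda i: clicks[i] / shows[i] if shows[i] else 1.0,
            reverse=True,
        )
        action = ranked[:k]  # the list-wise action
        _, _, per_item = env.step(action)
        for item, clicked in zip(action, per_item):
            shows[item] += 1
            clicks[item] += clicked
    return max(items, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
```

A real agent would replace the click-rate table with a learned policy over continuous state embeddings, but the interaction loop (recommend a list, observe feedback, update the strategy) is the same trial-and-error cycle the abstract describes.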