An ultimate goal of recommender systems is to improve user engagement. Reinforcement learning (RL) is a promising paradigm for this goal, as it directly optimizes the overall performance of sequential recommendation. However, many existing RL-based approaches incur substantial computational overhead, because they require storing not only the recommended items but also all other candidate items. This paper proposes an efficient alternative that does not require the candidate items. The idea is to model the correlation between user engagement and items directly from data. Moreover, the proposed approach considers randomness in user feedback and termination behavior, which are ubiquitous in recommender systems but rarely discussed in prior RL-based work. With online A/B experiments on a real-world recommender system, we confirm the efficacy of the proposed approach and the importance of modeling the two types of randomness.