Recent advances in recommender systems have been remarkably successful at optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have proven effective in a variety of long-term goal optimization tasks. For this reason, RL is widely regarded as a promising framework for optimizing long-term user engagement in recommendation. However promising, the application of RL heavily relies on well-designed rewards, and designing rewards related to long-term user engagement is quite difficult. To mitigate this problem, we propose a novel paradigm, Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences over users' historical behaviors rather than from explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals while avoiding complex reward engineering. PrefRec uses the preferences to train a reward function automatically, in an end-to-end manner. The reward function is then used to generate learning signals for training the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression, and reward-model pre-training to improve performance. Extensive experiments are conducted on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods on all tasks.
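To make the reward-learning idea concrete, below is a minimal sketch in PyTorch of the two ingredients named above: a reward model trained end-to-end on pairwise preferences over user-behavior trajectories (here via a standard Bradley-Terry-style objective, a common choice in preference-based RL), and an expectile-regression loss of the kind typically used for value learning. All class names, network sizes, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: module names, feature sizes, and data below are
# assumptions, not the PrefRec implementation.

class RewardModel(nn.Module):
    """Scores a trajectory of user-behavior features with a scalar reward."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, steps, feat_dim) -> sum per-step rewards over each trajectory.
        return self.net(traj).squeeze(-1).sum(dim=-1)

def preference_loss(rm, traj_a, traj_b, prefer_a):
    """Bradley-Terry-style objective: P(a preferred over b) = sigmoid(R(a) - R(b))."""
    logits = rm(traj_a) - rm(traj_b)
    return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)

def expectile_loss(pred, target, tau: float = 0.7):
    """Asymmetric L2 (expectile regression): weight |tau - 1(u < 0)| on u^2."""
    diff = target - pred
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    rm = RewardModel(feat_dim=8)
    opt = torch.optim.Adam(rm.parameters(), lr=1e-3)
    # Toy batch: 16 pairs of 10-step trajectories with preference labels
    # (1.0 means trajectory A was preferred, e.g. via crowdsourcing).
    traj_a, traj_b = torch.randn(16, 10, 8), torch.randn(16, 10, 8)
    prefer_a = torch.randint(0, 2, (16,)).float()
    loss = preference_loss(rm, traj_a, traj_b, prefer_a)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"preference loss: {loss.item():.4f}")
    # Expectile-regression example for the value-learning step (toy numbers).
    v_pred, v_target = torch.randn(16), torch.randn(16)
    print(f"expectile loss: {expectile_loss(v_pred, v_target).item():.4f}")
```

With tau > 0.5, the asymmetric weight makes the value estimate track an upper expectile of the target distribution rather than its mean, which is the role expectile regression commonly plays in offline value learning; the exact way PrefRec combines it with the additional value function and reward-model pre-training is described in the paper itself.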