Most existing deep reinforcement learning (RL) approaches for session-based recommendations rely either on costly online interactions with real users or on potentially biased rule-based or data-driven user-behavior models for learning. In this work, we instead focus on learning recommendation policies in the pure batch or offline setting, i.e., learning policies solely from offline historical interaction logs (batch data) generated by an unknown and sub-optimal behavior policy, without further access to the real-world environment or to user-behavior models. We propose BCD4Rec: Batch-Constrained Distributional RL for Session-based Recommendations. BCD4Rec builds upon recent advances in batch (offline) RL and distributional RL to learn from offline logs while dealing with the intrinsically stochastic nature of user rewards arising from varied latent interest preferences (environments). We demonstrate that BCD4Rec significantly improves upon the behavior policy as well as strong RL and non-RL baselines in the batch setting in terms of standard performance metrics such as Click-Through Rate and Buy Rate. Other useful properties of BCD4Rec include: (i) recommending items from the correct latent categories, indicating better value estimates despite the large action space (of the order of the number of items), and (ii) overcoming the popularity bias in clicked or bought items typically present in offline logs.
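To make the learning setup concrete, the following is a minimal PyTorch sketch of one batch-constrained distributional Q-learning update of the kind the abstract describes: a discrete-BCQ-style action constraint (only items sufficiently supported by the logged behavior policy are considered) combined with a quantile-regression critic over item returns. The GRU session encoder, network sizes, the quantile count, and the threshold `bc_threshold` are illustrative assumptions for this sketch, not the paper's released implementation.

```python
# Illustrative sketch, not the authors' code: batch-constrained distributional
# Q-learning update for session-based recommendation (discrete-BCQ-style
# constraint + quantile-regression critic). Sizes/hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SessionEncoder(nn.Module):
    """Encodes a session (sequence of item ids) into a state vector."""
    def __init__(self, n_items, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_items, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, item_seq):                      # item_seq: (B, T) item ids
        _, h = self.gru(self.emb(item_seq))           # h: (1, B, H)
        return h.squeeze(0)                           # (B, H)

class QuantileQNet(nn.Module):
    """Distributional critic: N return quantiles for every item (action)."""
    def __init__(self, hidden_dim, n_items, n_quantiles=32):
        super().__init__()
        self.n_items, self.n_quantiles = n_items, n_quantiles
        self.head = nn.Linear(hidden_dim, n_items * n_quantiles)

    def forward(self, state):                         # state: (B, H)
        return self.head(state).view(-1, self.n_items, self.n_quantiles)

def bcd_update(encoder, q_net, q_target, bc_head, batch, optimizer,
               gamma=0.99, bc_threshold=0.3, kappa=1.0):
    """One update on logged transitions.

    batch: item sequences `s`, `s_next` (B, T), logged action `a` (B,),
    reward `r` (B,) and float terminal flag `done` (B,).
    """
    s = encoder(batch["s"])
    with torch.no_grad():
        s_next = encoder(batch["s_next"])

        # Behavior-cloning head: keep only items the logging policy was likely
        # enough to recommend (probability relative to the most likely item).
        pi_b = F.softmax(bc_head(s_next), dim=-1)                       # (B, A)
        allowed = pi_b / pi_b.max(dim=-1, keepdim=True).values >= bc_threshold

        # Greedy next action among allowed items, ranked by the quantile mean.
        next_q_mean = q_net(s_next).mean(dim=-1)                        # (B, A)
        next_q_mean[~allowed] = -1e8
        a_star = next_q_mean.argmax(dim=-1)                             # (B,)

        # Distributional Bellman target from the target network's quantiles.
        next_quantiles = q_target(s_next)[torch.arange(len(a_star)), a_star]
        target = (batch["r"].unsqueeze(-1)
                  + gamma * (1 - batch["done"].unsqueeze(-1)) * next_quantiles)

    quantiles = q_net(s)[torch.arange(len(batch["a"])), batch["a"]]     # (B, N)

    # Quantile-regression Huber loss between predicted and target quantiles.
    n = quantiles.size(-1)
    taus = (torch.arange(n, dtype=torch.float32) + 0.5) / n             # (N,)
    td = target.unsqueeze(1) - quantiles.unsqueeze(2)                   # (B, N, N')
    huber = F.smooth_l1_loss(quantiles.unsqueeze(2).expand_as(td),
                             target.unsqueeze(1).expand_as(td),
                             reduction="none", beta=kappa)
    qr_loss = (torch.abs(taus.view(1, n, 1) - (td.detach() < 0).float()) * huber).mean()

    # Behavior-cloning loss so the constraint head tracks the logging policy.
    bc_loss = F.nll_loss(F.log_softmax(bc_head(s), dim=-1), batch["a"])

    optimizer.zero_grad()
    (qr_loss + bc_loss).backward()
    optimizer.step()
    return qr_loss.item(), bc_loss.item()

# Example wiring (hypothetical sizes):
# encoder = SessionEncoder(n_items=10000)
# q_net, q_target = QuantileQNet(64, 10000), QuantileQNet(64, 10000)
# bc_head = nn.Linear(64, 10000)
# optimizer = torch.optim.Adam(
#     [*encoder.parameters(), *q_net.parameters(), *bc_head.parameters()], lr=1e-3)
```

The two ingredients mirror the abstract: the behavior-cloning constraint keeps value estimates on actions actually supported by the offline logs (the batch-constrained part), and the quantile critic models the full return distribution rather than its mean, which is how the stochasticity of user rewards across latent interest preferences is handled (the distributional part).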