Bandit learning algorithms have become an increasingly popular design choice for recommender systems. Despite the strong interest in bandit learning from the community, multiple bottlenecks remain that prevent many bandit learning approaches from being productionized. Two of the most important bottlenecks are scaling to multi-task settings and A/B testing. Classic bandit algorithms, especially those leveraging contextual information, often require reward signals for uncertainty estimation, which hinders their adoption in multi-task recommender systems. Moreover, unlike supervised learning algorithms, bandit learning algorithms place great emphasis on the data collection process through their explorative nature. Such explorative behavior induces unfair evaluation of bandit learning agents in a classic A/B test setting. In this work, we present a novel design of a production bandit learning life-cycle for recommender systems, along with a novel set of metrics to measure their efficiency in user exploration. We show through large-scale production recommender system experiments and in-depth analysis that our bandit agent design improves personalization for the production recommender system and that our experiment design fairly evaluates the performance of bandit learning algorithms.
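As a concrete illustration of the reward dependence noted above, the following is a minimal sketch of a standard LinUCB-style contextual bandit arm, a textbook baseline rather than the production design proposed in this work; the class name, feature dimension, and `alpha` parameter are illustrative assumptions.

```python
import numpy as np


class LinUCBArm:
    """Per-arm state for a standard LinUCB contextual bandit (illustrative sketch,
    not the production agent described in this paper)."""

    def __init__(self, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = np.eye(dim)    # ridge-regularized design matrix: sum of x x^T
        self.b = np.zeros(dim)  # reward-weighted context sum: sum of r * x

    def ucb(self, x: np.ndarray) -> float:
        """Upper confidence bound for context x.

        The point estimate theta = A^{-1} b depends on observed rewards via b,
        so the agent cannot maintain a useful uncertainty-aware score for a
        task that never feeds rewards back to it.
        """
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return float(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))

    def update(self, x: np.ndarray, reward: float) -> None:
        """Both the confidence width (A) and the estimate (b) require the
        reward observation to be refined after each interaction."""
        self.A += np.outer(x, x)
        self.b += reward * x
```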