The contextual multi-armed bandit has been shown to be an effective tool in recommender systems. In this paper, we study a novel problem of multi-facet bandits involving a group of bandits, each characterizing the users' needs from one unique aspect. In each round, for the given user, we need to select one arm from each bandit, such that the combination of all arms maximizes the final reward. This problem has immediate applications in e-commerce, healthcare, and other domains. To address this problem, we propose a novel algorithm, named MuFasa, which utilizes an assembled neural network to jointly learn the underlying reward functions of multiple bandits. It estimates an Upper Confidence Bound (UCB) linked with the expected reward to balance exploitation and exploration. Under mild assumptions, we provide the regret analysis of MuFasa: it achieves the near-optimal $\widetilde{\mathcal{O}}((K+1)\sqrt{T})$ regret bound, where $K$ is the number of bandits and $T$ is the number of played rounds. Furthermore, we conduct extensive experiments to show that MuFasa outperforms strong baselines on real-world data sets.
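To make the per-round interaction concrete, below is a minimal sketch of the multi-facet selection loop. It substitutes a simple linear UCB estimator for the paper's assembled neural network (whose architecture is not specified in this abstract), and all names here (`LinUCBFacet`, `n_arms`, `alpha`, the simulated reward) are illustrative assumptions, not MuFasa's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K bandits (facets), each with n_arms arms described by
# d-dimensional context vectors.
K, n_arms, d = 3, 4, 5

class LinUCBFacet:
    """Per-facet linear UCB estimator -- a stand-in for the shared
    neural network that MuFasa uses to learn the reward functions."""
    def __init__(self, d, alpha=1.0):
        self.A = np.eye(d)      # regularized design matrix
        self.b = np.zeros(d)    # reward-weighted context sum
        self.alpha = alpha      # exploration strength

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        # Mean reward estimate plus a confidence width: the bonus term
        # drives exploration of poorly understood arms.
        return x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

facets = [LinUCBFacet(d) for _ in range(K)]

for t in range(100):
    # Contexts for every arm of every bandit in this round.
    contexts = rng.normal(size=(K, n_arms, d))
    # Select one arm per bandit by maximizing its UCB score.
    chosen = [
        int(np.argmax([facets[k].ucb(contexts[k, a]) for a in range(n_arms)]))
        for k in range(K)
    ]
    # A single final reward is observed for the arm combination;
    # here it is simulated, and each facet is credited with it.
    final_reward = rng.normal()
    for k in range(K):
        facets[k].update(contexts[k, chosen[k]], final_reward)
```

The key structural point the sketch illustrates is that one arm is pulled from each of the $K$ bandits per round, while only a single joint reward is observed for the whole combination; how that shared feedback is propagated to the individual facets is exactly what the assembled network in the paper is designed to learn.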