In many real-world applications, multiple agents seek to learn how to perform highly related yet slightly different tasks in an online bandit learning protocol. We formulate this problem as the $\epsilon$-multi-player multi-armed bandit problem, in which a set of players concurrently interact with a set of arms, and for each arm, the reward distributions for all players are similar but not necessarily identical. We develop an upper confidence bound-based algorithm, RobustAgg$(\epsilon)$, that adaptively aggregates rewards collected by different players. In the setting where an upper bound on the pairwise similarities of reward distributions between players is known, we achieve instance-dependent regret guarantees that depend on the amenability of information sharing across players. We complement these upper bounds with nearly matching lower bounds. In the setting where pairwise similarities are unknown, we provide a lower bound, as well as an algorithm that trades off minimax regret guarantees for adaptivity to unknown similarity structure.
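The abstract describes RobustAgg$(\epsilon)$ only at a high level. As a purely illustrative aid, the sketch below shows one way a UCB-style index might pool rewards across players while accounting for an $\epsilon$ bound on cross-player bias; the function and variable names (`robust_ucb_index`, `counts`, `sums`) are hypothetical and this is not the paper's actual implementation.

```python
import numpy as np

def robust_ucb_index(player, arm, counts, sums, t, eps):
    """
    Hypothetical sketch of an upper-confidence index that aggregates
    rewards across players, assuming each player's mean reward for an
    arm is within eps of every other player's mean (the assumed
    pairwise similarity bound). counts[p][a] and sums[p][a] hold
    player p's pull count and reward sum for arm a.
    """
    # Individual UCB: uses only this player's own samples.
    n_self = counts[player][arm]
    if n_self == 0:
        return np.inf
    mean_self = sums[player][arm] / n_self
    ucb_self = mean_self + np.sqrt(2 * np.log(t) / n_self)

    # Aggregated UCB: pools all players' samples, paying an eps bias
    # term because other players' means may differ by up to eps.
    n_all = sum(counts[p][arm] for p in counts)
    mean_all = sum(sums[p][arm] for p in counts) / n_all
    ucb_agg = mean_all + np.sqrt(2 * np.log(t) / n_all) + eps

    # Use whichever valid upper confidence bound is tighter.
    return min(ucb_self, ucb_agg)
```

Taking the minimum of the two bounds lets the index fall back on a player's own data when $\epsilon$ is large or the pooled estimate is uninformative, which is one natural way to realize the "adaptive aggregation" the abstract refers to.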