This paper tackles a multi-agent bandit setting in which $M$ agents cooperate to solve the same instance of a $K$-armed stochastic bandit problem. The agents are \textit{heterogeneous}: each agent has access only to a local subset of arms, and the agents are asynchronous, with different gaps between decision-making rounds. Each agent's goal is to find its optimal local arm, and agents can cooperate by sharing their observations with others. While cooperation between agents improves learning performance, it also introduces the additional complexity of inter-agent communication. For this heterogeneous multi-agent setting, we propose two learning algorithms, \ucbo and \AAE. We prove that both algorithms achieve order-optimal regret, namely $O\left(\sum_{i:\tilde{\Delta}_i>0} \log T/\tilde{\Delta}_i\right)$, where $\tilde{\Delta}_i$ is the minimum suboptimality gap between the mean reward of arm $i$ and that of any local optimal arm. Moreover, by carefully selecting the information that is valuable for cooperation, \AAE achieves a low communication complexity of $O(\log T)$. Finally, numerical experiments verify the efficiency of both algorithms.