We study a cooperative multi-agent multi-armed bandit problem with M agents and K arms. The goal of the agents is to minimize the cumulative regret. We adapt the traditional Thompson Sampling algorithm to the distributed setting. Since the agents are able to communicate, we note that communication can further reduce the upper bound on the regret of a distributed Thompson Sampling approach. To further improve performance, we propose a distributed elimination-based Thompson Sampling algorithm that allows the agents to learn collaboratively. We analyze the algorithm under Bernoulli rewards and derive a problem-dependent upper bound on the cumulative regret.
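For concreteness, the posterior-sampling core that the distributed and elimination-based variants build on can be sketched for a single agent as follows. This is a minimal illustrative sketch, not the paper's algorithm: it assumes Beta(1, 1) priors and Bernoulli rewards, and all function and variable names are hypothetical.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Single-agent Thompson Sampling for a K-armed Bernoulli bandit.

    Keeps a Beta posterior per arm: draw one sample from each posterior,
    pull the arm with the largest sample, then update that arm's posterior
    with the observed 0/1 reward.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [1] * k  # Beta alpha parameters (prior alpha = 1)
    failures = [1] * k   # Beta beta parameters (prior beta = 1)
    total_reward = 0
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Simulate a Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward, successes, failures
```

In the cooperative setting studied here, each of the M agents would run such a sampler locally and use communication to share reward statistics, which is what tightens the regret bound relative to M independent learners.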