We study the decentralized multi-agent multi-armed bandit problem for agents that communicate probabilistically over a network defined by a $d$-regular graph. Every edge of the graph carries probabilistic weight $p$, reflecting the probability $1\!-\!p$ that the corresponding communication link fails. At each time step, each agent chooses an arm and receives a numerical reward associated with the chosen arm. After each choice, each agent observes, with probability $p$, the most recent reward obtained by each of its neighbors. We propose a new Upper Confidence Bound (UCB)-based algorithm and analyze how agent-based strategies contribute to minimizing group regret in this probabilistic communication setting. We provide theoretical guarantees showing that our algorithm outperforms state-of-the-art algorithms. We illustrate our results and validate the theoretical claims using numerical simulations.
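For concreteness, a minimal sketch of the kind of UCB index such an agent could use is given below; the exact index of the proposed algorithm is not specified in this abstract, so the form shown (own pulls pooled with successfully observed neighbor pulls) is an illustrative assumption rather than the paper's actual rule:
$$
A_i(t) \;=\; \arg\max_{k} \left\{ \hat{\mu}_{i,k}(t) + \sqrt{\frac{2\ln t}{N_{i,k}(t)}} \right\},
$$
where, under this assumption, $\hat{\mu}_{i,k}(t)$ and $N_{i,k}(t)$ denote the empirical mean reward and pull count of arm $k$ as seen by agent $i$, aggregated over its own pulls and over neighbor pulls whose rewards were successfully communicated (each such observation arriving independently with probability $p$).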