We consider the nonstochastic multi-agent multi-armed bandit problem in which agents collaborate via a communication network with delays. We prove a lower bound on the individual regret of every agent. We show that, with suitable regularizers and communication protocols, a collaborative multi-agent \emph{follow-the-regularized-leader} (FTRL) algorithm has an individual regret upper bound that matches the lower bound up to a constant factor when the number of arms is large enough relative to the degrees of the agents in the communication graph. We also show that an FTRL algorithm with a suitable regularizer achieves optimal regret scaling with respect to the edge-delay parameter. We present numerical experiments validating our theoretical results and demonstrating cases in which our algorithms outperform previously proposed algorithms.
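For concreteness, a minimal sketch of the generic single-agent FTRL update that such a collaborative algorithm builds on, stated with a learning rate $\eta_t > 0$, a convex regularizer $F$ over the probability simplex $\Delta_K$, and importance-weighted loss estimates $\widehat{\ell}_s$; the specific regularizers, estimates, and communication protocol are those developed in the body of the paper:
\[
p_{t+1} \in \operatorname*{arg\,min}_{p \in \Delta_K} \left\{ \eta_t \Big\langle p,\, \sum_{s=1}^{t} \widehat{\ell}_s \Big\rangle + F(p) \right\},
\]
after which each agent samples its arm $A_{t+1} \sim p_{t+1}$. As one common choice from the nonstochastic bandit literature (not necessarily the one used here), the negative Tsallis entropy $F(p) = -\sum_{i=1}^{K} \sqrt{p_i}$ yields a Tsallis-INF-style update.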