This paper introduces a general multi-agent bandit model in which each agent faces a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This often makes communication between agents necessary. This general setting makes it possible to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization (Shi et al., 2021). In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
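To make the mixed-reward objective concrete, here is a sketch of the definition using notation introduced for illustration only (it is not taken from the abstract): with $M$ agents, $K$ arms, expected reward $\mu_n(k)$ of arm $k$ for agent $n$, and weights $w_{m,n} \ge 0$, the quantity each agent $m$ targets can be written as

$$
\tilde{\mu}_m(k) \;=\; \sum_{n=1}^{M} w_{m,n}\, \mu_n(k),
\qquad
k^{\star}_m \;=\; \arg\max_{k \in \{1,\dots,K\}} \tilde{\mu}_m(k).
$$

Under this assumed formulation, whenever some weight $w_{m,n}$ with $n \neq m$ is non-zero, agent $m$ cannot estimate $\tilde{\mu}_m(k)$ from its own samples alone, which is why communication through the central controller is generally required.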