We consider a fully decentralized multi-player stochastic multi-armed bandit setting where the players cannot communicate with each other and can observe only their own actions and rewards. The environment may appear differently to different players, \textit{i.e.}, the reward distributions for a given arm are heterogeneous across players. In the case of a collision (when more than one player plays the same arm), we allow the colliding players to receive non-zero rewards. The time-horizon $T$ for which the arms are played is \emph{not} known to the players. Within this setup, where the number of players is allowed to be greater than the number of arms, we present a policy that achieves near order-optimal expected regret of order $O(\log^{1 + \delta} T)$ for some $0 < \delta < 1$ over a time-horizon of duration $T$. This paper has been accepted for publication in the IEEE Transactions on Information Theory.
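To make the setting concrete, the following minimal Python sketch simulates one round of the reward model described above under assumed specifics: per-player Bernoulli arm means \texttt{mu} (heterogeneous across players) and a hypothetical \texttt{collision\_gain} factor producing non-zero rewards under collision. These names and distributional choices are illustrative assumptions, not the paper's exact model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K = 5, 3                          # more players (N) than arms (K) is allowed
mu = rng.uniform(0.2, 0.9, (N, K))   # heterogeneous per-player, per-arm means
collision_gain = 0.5                 # hypothetical non-zero reward scaling on collision


def play_round(actions):
    """actions[n] = arm chosen by player n; returns the per-player rewards."""
    counts = np.bincount(actions, minlength=K)   # how many players chose each arm
    rewards = np.empty(N)
    for n, k in enumerate(actions):
        base = rng.binomial(1, mu[n, k])         # player-specific reward draw
        # Colliding players still receive a (scaled) non-zero reward.
        rewards[n] = base if counts[k] == 1 else collision_gain * base
    return rewards


# Each player observes only its own action and reward (no communication):
actions = rng.integers(0, K, size=N)
print(play_round(actions))
\end{verbatim}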