This paper considers multi-agent reinforcement learning (MARL) where rewards are received after delays and the delay lengths vary across agents. Based on the V-learning framework, this paper proposes MARL algorithms that efficiently handle reward delays. When the delays are finite, our algorithm reaches a coarse correlated equilibrium (CCE) at rate $\tilde{\mathcal{O}}(\frac{H^3\sqrt{S\mathcal{T}_K}}{K}+\frac{H^3\sqrt{SA}}{\sqrt{K}})$, where $K$ is the number of episodes, $H$ is the planning horizon, $S$ is the size of the state space, $A$ is the size of the largest action space, and $\mathcal{T}_K$ is a measure of the total delay defined in the paper. Moreover, our algorithm can be extended to cases with infinite delays through a reward-skipping scheme, and it achieves a convergence rate similar to that of the finite-delay case.
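As a rough illustration of how the two terms of the stated rate behave, the sketch below plugs hypothetical example values of $K$, $H$, $S$, $A$, and $\mathcal{T}_K$ into the bound, ignoring the logarithmic factors hidden by $\tilde{\mathcal{O}}$. The numerical values are assumptions chosen only for illustration and are not taken from the paper.

```python
import math

# Hypothetical example values (not from the paper), chosen only to
# illustrate the two terms of the stated CCE convergence rate.
K = 10_000      # number of episodes
H = 10          # planning horizon
S = 100         # size of the state space
A = 5           # size of the largest action space
T_K = 2 * K     # total-delay measure; here assumed to grow linearly in K

# First term: H^3 * sqrt(S * T_K) / K  (depends on the total delay)
term_delay = H**3 * math.sqrt(S * T_K) / K

# Second term: H^3 * sqrt(S * A) / sqrt(K)  (independent of the delay measure)
term_no_delay = H**3 * math.sqrt(S * A) / math.sqrt(K)

print(f"delay-dependent term       : {term_delay:.3f}")
print(f"delay-independent term     : {term_no_delay:.3f}")
print(f"sum (up to log factors)    : {term_delay + term_no_delay:.3f}")
```

Under these assumed values the delay-dependent term shrinks as $\mathcal{T}_K/K^2$ grows small, so with delays that accumulate sub-quadratically in $K$ the delay-independent term eventually dominates; this is purely an arithmetic reading of the formula, not a claim from the paper.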