Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in the global reward game, but its underlying mechanism is not yet fully understood. This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory. We generalise the Shapley value to the Markov convex game, yielding the Markov Shapley value (MSV), and apply it as a value factorisation method in the global reward game; this application is justified by the equivalence between the two games. Based on the properties of MSV, we derive the Shapley-Bellman optimality equation (SBOE) for evaluating the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose the Shapley-Bellman operator (SBO), which we prove solves the SBOE. Via a stochastic approximation and some transformations, a new MARL algorithm called Shapley Q-learning (SHAQ) is established, whose implementation is guided by the theoretical results on SBO and MSV. We also discuss the relationship between SHAQ and relevant value factorisation methods. In the experiments, SHAQ exhibits not only superior performance on all tasks but also interpretability that agrees with the theoretical analysis. The implementation of this paper is available at https://github.com/hsvgbkhgbv/shapley-q-learning.
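For context, a minimal sketch of the classic (cooperative game theoretic) Shapley value that MSV generalises to the Markov setting: for a set of agents $N$ and a coalition value function $v$, agent $i$'s credit is its marginal contribution averaged over all orderings of coalition formation,
$$
\phi_i(v) \;=\; \sum_{C \subseteq N \setminus \{i\}} \frac{|C|!\,\bigl(|N|-|C|-1\bigr)!}{|N|!}\,\Bigl[\,v\bigl(C \cup \{i\}\bigr) - v(C)\,\Bigr].
$$
This is the standard definition, not the paper's exact MSV formula; in the Markov convex game the abstract describes, the coalition value $v(\cdot)$ would be replaced by state-dependent (action-)value functions, and the resulting per-agent MSVs factorise the global value.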