As one approach to the Dec-POMDP problem, value decomposition methods have recently achieved strong results. However, most value decomposition methods require the global state during training, which is infeasible in scenarios where the global state cannot be obtained. We therefore propose a novel value decomposition framework, named State Inference for value DEcomposition (SIDE), which eliminates the need to know the true state by simultaneously solving the two problems of optimal control and state inference. SIDE can be extended to any value decomposition method, as well as to other types of multi-agent algorithms in the Dec-POMDP setting. Based on the performance of different algorithms on StarCraft II micromanagement tasks, we verify that SIDE can construct a current state that benefits the reinforcement learning process from past local observations.
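To make the idea concrete, the following is a minimal sketch (not the paper's actual architecture) of conditioning a value decomposition mixer on a state inferred from past local observations rather than on the true global state. The module names (StateInferenceEncoder, Mixer), the GRU-based encoder, and all dimensions are illustrative assumptions; SIDE's actual state-inference mechanism may differ.

```python
import torch
import torch.nn as nn

class StateInferenceEncoder(nn.Module):
    """Infers a latent 'state' from the concatenated local observation
    histories of all agents (hypothetical module; the paper's actual
    state-inference mechanism may differ)."""
    def __init__(self, n_agents, obs_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(n_agents * obs_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, joint_obs_seq):
        # joint_obs_seq: (batch, time, n_agents * obs_dim)
        h_seq, _ = self.rnn(joint_obs_seq)
        return self.to_latent(h_seq)  # (batch, time, latent_dim)

class Mixer(nn.Module):
    """QMIX-style monotonic mixer conditioned on the inferred latent
    state instead of the true global state."""
    def __init__(self, n_agents, latent_dim, embed_dim=32):
        super().__init__()
        self.w1 = nn.Linear(latent_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(latent_dim, embed_dim)
        self.w2 = nn.Linear(latent_dim, embed_dim)
        self.b2 = nn.Linear(latent_dim, 1)

    def forward(self, agent_qs, latent_state):
        # agent_qs: (batch, time, n_agents); latent_state: (batch, time, latent_dim)
        b, t, n = agent_qs.shape
        w1 = torch.abs(self.w1(latent_state)).view(b, t, n, -1)  # non-negative weights for monotonicity
        b1 = self.b1(latent_state).unsqueeze(2)
        hidden = torch.relu(torch.matmul(agent_qs.unsqueeze(2), w1) + b1)
        w2 = torch.abs(self.w2(latent_state)).view(b, t, -1, 1)
        b2 = self.b2(latent_state).unsqueeze(2)
        q_tot = torch.matmul(hidden, w2) + b2
        return q_tot.squeeze(-1).squeeze(-1)  # (batch, time)

# Usage sketch: infer the state from past local observations, then mix per-agent Q-values.
if __name__ == "__main__":
    n_agents, obs_dim, latent_dim = 3, 10, 16
    encoder = StateInferenceEncoder(n_agents, obs_dim, latent_dim)
    mixer = Mixer(n_agents, latent_dim)
    joint_obs = torch.randn(4, 20, n_agents * obs_dim)  # (batch, time, joint obs)
    agent_qs = torch.randn(4, 20, n_agents)             # per-agent chosen Q-values
    q_tot = mixer(agent_qs, encoder(joint_obs))         # (4, 20)
```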