A practical challenge in reinforcement learning is posed by combinatorial action spaces, which make planning computationally demanding. For example, in cooperative multi-agent reinforcement learning, a potentially large number of agents jointly optimize a global reward function, which leads to a combinatorial blow-up of the action space in the number of agents. As a minimal requirement, we assume access to an argmax oracle that allows the greedy policy to be computed efficiently for any Q-function in the model class. Building on recent work on planning with local access to a simulator and linear function approximation, we propose efficient algorithms for this setting whose compute and query complexity are polynomial in all relevant problem parameters. For the special case where the feature decomposition is additive, we further improve the bounds and extend the results to the kernelized setting with an efficient algorithm.
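To make the two structural assumptions concrete, the following is a minimal sketch in our own notation (the symbols $m$, $\mathcal{A}_i$, $\phi_i$, and $\theta$ are illustrative and not fixed by the statement above): the argmax oracle returns a greedy joint action
\[
\pi_Q(s) \in \operatorname*{arg\,max}_{a \in \mathcal{A}_1 \times \cdots \times \mathcal{A}_m} Q(s, a)
\]
for any Q-function $Q$ in the model class, without enumerating the exponentially many joint actions. In the additive special case, the features decompose across agents as
\[
\phi(s, a) = \sum_{i=1}^{m} \phi_i(s, a_i), \qquad Q_\theta(s, a) = \langle \theta, \phi(s, a) \rangle,
\]
so that maximizing a linear Q-function over the joint action decouples into $m$ per-agent maximizations.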