The standard formulation of Reinforcement Learning lacks a practical way to specify which behaviors are admissible and which are forbidden. Most often, practitioners approach behavior specification by manually engineering the reward function, a counter-intuitive process that requires many iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the effort spent on reward specification in applied Reinforcement Learning projects. To this end, we propose to specify behavioral preferences in the CMDP framework and to use Lagrangian methods, which solve a min-max problem between the agent's policy and the Lagrange multipliers, to automatically weigh each of the behavioral constraints. Specifically, we investigate how CMDPs can be adapted to solve goal-based tasks while adhering to a set of behavioral constraints, and propose modifications to the SAC-Lagrangian algorithm to handle the challenging case of several constraints. We evaluate this framework on a set of continuous control tasks relevant to the application of Reinforcement Learning to NPC design in video games.
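The min-max structure mentioned above can be illustrated with a minimal sketch of dual gradient ascent on the Lagrange multipliers: each multiplier rises when its behavioral constraint is violated and decays toward zero when it is satisfied, while the policy maximizes the penalized reward in the inner problem. All names (`costs`, `budgets`, `lr`) are illustrative, not taken from the paper's method.

```python
# Hedged sketch of the dual (multiplier) update in Lagrangian constrained RL.
# For each constraint i with measured episodic cost c_i and budget d_i,
# lambda_i is updated by projected gradient ascent: it grows when c_i > d_i
# (constraint violated) and shrinks toward zero otherwise.

def update_multipliers(lambdas, costs, budgets, lr=0.01):
    """One step of projected gradient ascent on the Lagrange multipliers."""
    return [max(0.0, lam + lr * (c - d))
            for lam, c, d in zip(lambdas, costs, budgets)]

def lagrangian_reward(reward, lambdas, costs):
    """Penalized reward the policy maximizes in the inner problem."""
    return reward - sum(lam * c for lam, c in zip(lambdas, costs))

# Example with two behavioral constraints: the first is violated
# (cost 1.2 > budget 1.0), so its multiplier increases; the second is
# satisfied (cost 0.3 < budget 1.0), so its multiplier decreases.
lam = update_multipliers([0.5, 0.5], [1.2, 0.3], [1.0, 1.0])
penalized = lagrangian_reward(1.0, lam, [1.2, 0.3])
```

Automating the trade-off this way is what removes the manual reward-weight tuning: the multipliers play the role of the penalty coefficients a practitioner would otherwise hand-tune.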