Reinforcement learning in complex environments may require supervision to prevent the agent from attempting dangerous actions. As a result of supervisor intervention, the executed action may differ from the action specified by the policy. How does this affect learning? We present the Modified-Action Markov Decision Process, an extension of the MDP model that allows actions to differ from the policy. We analyze the asymptotic behaviours of common reinforcement learning algorithms in this setting and show that they adapt in different ways: some completely ignore modifications, while others go to various lengths to avoid action modifications that decrease reward. By choosing the right algorithm, developers can prevent their agents from learning to circumvent interruptions or constraints, and better control agent responses to other kinds of action modification, such as self-damage.
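To make the setting concrete, the following is a minimal illustrative sketch (not the paper's formalism) of an environment step in which a supervisor may override the policy's chosen action, so the executed action differs from the specified one. All names here (`supervisor`, `momdp_step`, the toy transition function) are hypothetical.

```python
# Hypothetical sketch of a Modified-Action MDP step: the policy proposes
# an action, but a supervisor may substitute a different one before
# execution. Which action the learner conditions on (proposed vs.
# executed) shapes how it adapts to interventions.

def supervisor(state, action):
    """Intervene by replacing a dangerous action with a safe fallback."""
    if action == "dangerous":
        return "noop"  # executed action now differs from the policy's choice
    return action

def transition(state, action):
    """Toy dynamics: 'noop' leaves the state unchanged with zero reward."""
    if action == "noop":
        return state, 0.0
    return state + 1, 1.0

def momdp_step(state, policy_action):
    """One step: apply supervisor modification, then the environment."""
    executed = supervisor(state, policy_action)
    next_state, reward = transition(state, executed)
    return executed, next_state, reward

# A dangerous proposal is overridden: no state change, no reward.
executed, next_state, reward = momdp_step(0, "dangerous")
```

An algorithm that treats `executed` as its own choice will learn differently from one that conditions on `policy_action`, which is the distinction the analysis in the paper turns on.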