Reinforcement learning systems will increasingly make decisions that significantly affect human well-being, so it is essential that these systems conform to our expectations of morally good behavior. Moral goodness is often defined in causal terms: whether one's actions in fact caused a particular outcome, and whether that outcome could have been anticipated. We propose an online reinforcement learning method that learns a policy under the constraint that the agent must not be the cause of harm. This is accomplished by defining cause using the theory of actual causation and assigning blame to the agent when its actions are the actual cause of an undesirable outcome. We conduct experiments on a toy ethical dilemma in which a natural choice of reward function leads to clearly undesirable behavior, whereas our method learns a policy that avoids causing harm, demonstrating the soundness of our approach. Allowing an agent to learn while observing causal moral distinctions such as blame opens the possibility of learning policies that better conform to our moral judgments.
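The idea of penalizing an agent only when it is the actual cause of harm can be illustrated with a minimal sketch. The dilemma, the but-for counterfactual test (a simplification of full actual causation), and the `BLAME_PENALTY` constant below are all hypothetical choices for illustration, not the paper's actual environment or blame definition:

```python
import random

# Hypothetical one-step dilemma: action 0 yields a higher reward but causes
# harm; action 1 is harmless. An ambient hazard the agent cannot control
# sometimes causes harm regardless of the action taken.

def harm(action, ambient):
    # Harm occurs if the agent takes action 0, or if the ambient hazard fires.
    return action == 0 or ambient

def reward(action):
    return 1.0 if action == 0 else 0.5

def is_actual_cause(action, ambient):
    """But-for counterfactual test (a simplification of actual causation):
    the action caused the harm iff harm occurred and some alternative action
    would have avoided it, holding the rest of the world fixed."""
    if not harm(action, ambient):
        return False
    return any(not harm(alt, ambient) for alt in (0, 1) if alt != action)

BLAME_PENALTY = 10.0  # assumed constant, not taken from the paper

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(episodes):
        ambient = rng.random() < 0.1  # uncontrollable hazard
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda i: q[i])
        r = reward(a)
        if is_actual_cause(a, ambient):
            r -= BLAME_PENALTY  # blame only when the agent caused the harm
        q[a] += alpha * (r - q[a])  # one-step episodes, no bootstrapping
    return q

q = train()
print(q)
```

Note that when the ambient hazard fires, harm occurs under either action, so the but-for test assigns no blame; the penalty applies only in episodes where a different action would have prevented the harm. Under this penalty the harmless action dominates, even though its raw reward is lower.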