Reinforcement learning (RL) has shown promise as a tool for engineering safe, ethical, or legal behaviour in autonomous agents. Its use typically relies on assigning punishments to state-action pairs that constitute unsafe or unethical choices. Although this assignment is a crucial step in the approach, there has been limited discussion of how to generalize the process of selecting punishments and deciding where to apply them. In this paper, we adopt an approach that leverages an existing framework -- the normative supervisor of Neufeld et al. (2021) -- during training. The normative supervisor dynamically translates states and the applicable normative system into defeasible deontic logic theories, feeds these theories to a theorem prover, and uses the derived conclusions to decide whether or not to assign a punishment to the agent. We use multi-objective RL (MORL) to balance the ethical objective of avoiding violations against a non-ethical objective; we demonstrate that our approach works for a multiplicity of MORL techniques, and show that it is effective regardless of the magnitude of the punishment assigned.
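To make the described training loop concrete, the Python sketch below shows where such a supervisor could sit. It is a minimal illustration, not the authors' implementation: the theorem-prover query over a defeasible deontic logic theory is abstracted into a user-supplied violation predicate, and all names (`NormativeSupervisor`, `violates`, `vector_reward`) are hypothetical.

```python
import numpy as np


class NormativeSupervisor:
    """Stub of a normative supervisor in the spirit of Neufeld et al. (2021).

    The real system translates the current state and the applicable
    normative system into a defeasible deontic logic theory and queries
    a theorem prover; here that call is abstracted into a boolean
    predicate `violates(state, action)`.
    """

    def __init__(self, violates):
        # `violates(state, action) -> bool` stands in for the prover query.
        self.violates = violates

    def punishment(self, state, action, magnitude=1.0):
        # Assign a negative ethical reward iff a violation is derived.
        return -magnitude if self.violates(state, action) else 0.0


def vector_reward(env_reward, supervisor, state, action):
    """Two-component MORL reward: [non-ethical objective, ethical objective]."""
    return np.array([env_reward, supervisor.punishment(state, action)])
```

During training, a MORL algorithm (for example, a linear-scalarization or lexicographic method) would consume this reward vector in place of a scalar reward; the claim in the abstract is that learning to avoid violations is robust to the choice of the punishment `magnitude`.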