Real-world robotic tasks require complex reward functions. When we define the problem the robot needs to solve, we pretend that a designer specifies this complex reward exactly, and that it is set in stone from then on. In practice, however, reward design is an iterative process: the designer chooses a reward, eventually encounters an "edge-case" environment where the reward incentivizes the wrong behavior, revises the reward, and repeats. What would it mean to rethink robotics problems to formally account for this iterative nature of reward design? We propose that the robot not take the specified reward for granted, but rather maintain uncertainty about it, and treat future design iterations as future evidence. We contribute an Assisted Reward Design method that speeds up the design process by anticipating and influencing this future evidence: rather than letting the designer eventually encounter failure cases and revise the reward then, the method actively exposes the designer to such environments during the development phase. We test this method in a simplified autonomous driving task and find that it more quickly improves the car's behavior in held-out environments by proposing environments that are "edge cases" for the current reward.
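The abstract does not spell out the selection criterion, but the idea of proposing "edge case" environments can be sketched concretely. Below is a minimal, hypothetical Python sketch, assuming rewards linear in hand-designed features, a posterior over reward weights centered on the designer's current proxy reward, and stand-in helpers (`planned_features`, `edge_case_score`) that are not from the paper: an environment scores highly when behavior optimized for the proxy reward incurs large regret under plausible true rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: rewards are linear in features, r(xi) = w . phi(xi).
# The designer's current (proxy) reward weights:
proxy_w = np.array([1.0, -0.5, 0.2])

# Samples from a posterior over the "true" reward, centered on the proxy --
# the robot treats the specified reward as evidence, not ground truth.
posterior_ws = proxy_w + 0.3 * rng.normal(size=(50, 3))

def planned_features(env_seed, w, n_candidates=64):
    """Stand-in planner: return features of the best trajectory for
    weights w in the given environment. A real system would run
    trajectory optimization; here we pick among random candidates."""
    env_rng = np.random.default_rng(env_seed)
    candidates = env_rng.normal(size=(n_candidates, w.shape[0]))
    return candidates[np.argmax(candidates @ w)]

def edge_case_score(env_seed):
    """Expected regret of proxy-optimal behavior under plausible true
    rewards. A high score flags a likely edge case for the proxy."""
    phi_proxy = planned_features(env_seed, proxy_w)
    regrets = []
    for w in posterior_ws:
        phi_best = planned_features(env_seed, w)
        regrets.append(w @ phi_best - w @ phi_proxy)  # regret under w
    return float(np.mean(regrets))

# Propose the environment where the current reward is most suspect,
# so the designer can inspect the behavior and revise the reward now
# rather than after deployment.
env_pool = range(100)
proposal = max(env_pool, key=edge_case_score)
print("propose environment", proposal, "score", edge_case_score(proposal))
```

In this sketch, the designer would inspect the proposed environment, revise `proxy_w` if the behavior is wrong, and the loop repeats with an updated posterior; the specific acquisition criterion and planner are assumptions, not the paper's exact algorithm.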