Reward learning algorithms utilize human feedback to infer a reward function, which is then used to train an AI system. This human feedback is often a preference comparison, in which the human teacher compares several samples of AI behavior and chooses the one they believe best accomplishes the objective. While reward learning typically assumes that all feedback comes from a single teacher, in practice these systems often query multiple teachers to gather sufficient training data. In this paper, we investigate this disparity and find that algorithmically evaluating these different sources of feedback facilitates more accurate and efficient reward learning. We formally analyze the value of information (VOI) when reward learning from teachers with varying levels of rationality, and define and evaluate an algorithm that uses this VOI to actively select which teachers to query for feedback. Surprisingly, we find that it is often more informative to query comparatively irrational teachers. By formalizing this problem and deriving an analytical solution, we hope to facilitate improvement in reward-learning approaches to aligning AI behavior with human values.
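To make the setup concrete, the sketch below illustrates one common way this kind of teacher selection can be framed: teachers are modeled as Boltzmann-rational with a rationality coefficient beta, and the informativeness of a query is scored by expected information gain over a discrete set of reward hypotheses, a standard proxy for VOI. This is a minimal illustrative sketch, not the paper's algorithm; the teacher names, hypothesis set, prior, and reward values are assumptions made for the example.

```python
import numpy as np

def preference_prob(beta, r_a, r_b):
    """P(teacher prefers trajectory A over B) under a Boltzmann-rational model."""
    return 1.0 / (1.0 + np.exp(-beta * (r_a - r_b)))

def expected_info_gain(prior, rewards_a, rewards_b, beta):
    """Expected entropy reduction over reward hypotheses from one preference
    query to a teacher with rationality beta (a common proxy for VOI)."""
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Likelihood of the teacher answering "A", under each reward hypothesis.
    lik_a = preference_prob(beta, rewards_a, rewards_b)
    gain = 0.0
    for answer_lik in (lik_a, 1.0 - lik_a):       # teacher answers "A" or "B"
        marginal = np.sum(prior * answer_lik)     # P(answer)
        posterior = prior * answer_lik / marginal # Bayes update
        gain += marginal * (entropy(prior) - entropy(posterior))
    return gain

# Three assumed reward hypotheses, each assigning returns to trajectories A and B.
prior = np.array([1/3, 1/3, 1/3])
rewards_a = np.array([1.0, 0.2, 0.5])
rewards_b = np.array([0.0, 0.8, 0.5])

# Hypothetical teachers with different rationality levels; query the one whose
# answer is expected to be most informative.
teacher_betas = {"careful": 5.0, "noisy": 0.5}
voi = {name: expected_info_gain(prior, rewards_a, rewards_b, b)
       for name, b in teacher_betas.items()}
print(voi, "-> query:", max(voi, key=voi.get))
```

In this toy setup the ranking of teachers depends on the hypothesis set and reward gaps; the paper's point is that, once such VOI calculations are made explicit, the less rational teacher can turn out to be the more informative one to query.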