From the earliest years of our lives, humans use language to express their beliefs and desires. Being able to talk to artificial agents about our preferences would thus fulfill a central goal of value alignment. Yet today, we lack computational models explaining such flexible and abstract language use. To address this challenge, we consider social learning in a linear bandit setting and ask how a human might communicate preferences over behaviors (i.e., the reward function). We study two distinct types of language: instructions, which provide information about the desired policy, and descriptions, which provide information about the reward function. To explain how humans use these forms of language, we suggest they reason about both known present and unknown future states: instructions optimize for the present, while descriptions generalize to the future. We formalize this choice by extending reward design to consider a distribution over states. We then define a pragmatic listener agent that infers the speaker's reward function by reasoning about how the speaker expresses themselves. We validate our models with a behavioral experiment, demonstrating that (1) our speaker model predicts spontaneous human behavior, and (2) our pragmatic listener can recover speakers' reward functions. Finally, we show that in traditional reinforcement learning settings, pragmatic social learning can integrate with and accelerate individual learning. Our findings suggest that social learning from a wider range of language -- in particular, expanding the field's present focus on instructions to include learning from descriptions -- is a promising approach for value alignment and reinforcement learning more broadly.
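As a minimal illustration of the pragmatic-listener idea described above, the sketch below infers a reward-weight vector from a single utterance in a toy linear bandit, using a Rational-Speech-Acts-style speaker model as the likelihood in Bayes' rule. This is not the paper's implementation: the feature matrix PHI, the hypothesis grid W, the utterance set, and the truth-conditional semantics are all illustrative assumptions.

    # A minimal sketch (assumed setup, not the authors' code) of a pragmatic
    # listener that infers reward weights from an utterance in a linear bandit.
    import numpy as np

    # Arms are feature vectors; reward is linear: r(a) = w . phi(a).
    PHI = np.array([[1.0, 0.0],   # arm 0: pure feature A
                    [0.0, 1.0],   # arm 1: pure feature B
                    [0.5, 0.5]])  # arm 2: mixture

    # Discrete hypothesis space over reward weights w (illustrative grid).
    W = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.3], [0.3, 0.7]])

    # Utterances: "instructions" name an arm; "descriptions" name a feature.
    UTTERANCES = ["pull arm 0", "pull arm 1", "pull arm 2",
                  "feature A is good", "feature B is good"]

    def literal_meaning(u, w):
        # Truth-conditional semantics: 1.0 if u is literally consistent with w.
        if u.startswith("pull arm "):
            arm = int(u.split()[-1])
            return float(arm == np.argmax(PHI @ w))   # names an optimal arm
        if u == "feature A is good":
            return float(w[0] >= w[1])
        if u == "feature B is good":
            return float(w[1] >= w[0])
        return 0.0

    def speaker(w, alpha=4.0):
        # S1: soft-max over utterances by informativeness to a literal listener.
        truth = np.array([[literal_meaning(u, w2) for w2 in W] for u in UTTERANCES])
        l0 = truth / np.maximum(truth.sum(axis=1, keepdims=True), 1e-12)
        idx = int(np.argmin([np.abs(w2 - w).sum() for w2 in W]))  # index of true w
        util = np.log(np.maximum(l0[:, idx], 1e-12))
        p = np.exp(alpha * util)
        return p / p.sum()

    def pragmatic_listener(u):
        # L1: Bayesian posterior over W, using the speaker model as likelihood.
        u_idx = UTTERANCES.index(u)
        like = np.array([speaker(w)[u_idx] for w in W])
        return like / like.sum()

    print(pragmatic_listener("feature A is good"))  # mass shifts to A-heavy weights

In this toy setting, a description ("feature A is good") constrains the listener's posterior over all hypotheses in which feature A outweighs feature B, whereas an instruction ("pull arm 0") only constrains hypotheses for which that arm is optimal in the current states; this contrast is one way to read the abstract's claim that descriptions generalize to future states while instructions optimize for the present.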