Models of human behavior for prediction and collaboration tend to fall into two categories: ones that learn from large amounts of data via imitation learning, and ones that assume human behavior to be noisily optimal for some reward function. The former are very useful, but only when it is possible to gather a lot of human data in the target environment and distribution. The advantage of the latter type, which includes Boltzmann rationality, is the ability to make accurate predictions in new environments without extensive data when humans are actually close to optimal. However, these models fail when humans exhibit systematic suboptimality, i.e., when their deviations from optimal behavior are not independent but instead consistent over time. Our key insight is that systematic suboptimality can be modeled by predicting policies, which couple action choices over time, instead of trajectories. We introduce the Boltzmann policy distribution (BPD), which serves as a prior over human policies and adapts via Bayesian inference to capture systematic deviations by observing human actions during a single episode. The BPD is difficult to compute and represent because policies lie in a high-dimensional continuous space, but we leverage tools from generative and sequence models to enable efficient sampling and inference. We show that the BPD enables prediction of human behavior and human-AI collaboration as well as imitation-learning-based human models while using far less data.
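To make the contrast concrete, below is a minimal sketch of the core idea: maintaining a distribution over whole policies and updating it by Bayesian inference as actions are observed, so that consistent deviations carry over across an episode. Everything here is an illustrative assumption rather than the paper's method: a tiny tabular environment with made-up Q-values, a Gaussian perturbation of the Q-values as a crude stand-in for the learned policy prior, and a simple particle approximation in place of the generative and sequence models the paper actually uses.

```python
import numpy as np

# Illustrative setup only: a tiny tabular problem with stand-in Q-values.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
Q = rng.normal(size=(n_states, n_actions))  # stand-in Q-values
beta = 2.0                                  # Boltzmann rationality coefficient

def softmax_policy(q, beta):
    """Boltzmann-rational action distribution: softmax over Q-values."""
    logits = beta * q
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def sample_policies(n_samples, sigma=1.0):
    """Draw policies from a crude prior: each sample perturbs Q once, so
    its deviations from optimality are consistent across states and time."""
    noise = rng.normal(scale=sigma, size=(n_samples, n_states, n_actions))
    return softmax_policy(Q[None] + noise, beta)

def update(log_w, policies, state, action):
    """Bayesian step: reweight each sampled policy by the likelihood it
    assigns to the observed action, then renormalize in log space."""
    log_w = log_w + np.log(policies[:, state, action])
    return log_w - np.logaddexp.reduce(log_w)

policies = sample_policies(500)
log_w = np.full(len(policies), -np.log(len(policies)))  # uniform prior weights
for s, a in [(0, 2), (1, 2), (0, 2)]:  # observed (state, action) pairs
    log_w = update(log_w, policies, s, a)

# Posterior-predictive action distribution: a weighted mixture of policies.
pred = np.einsum("k,ksa->sa", np.exp(log_w), policies)
print(pred[0])  # predicted action probabilities in state 0
```

Because each sampled policy fixes its deviation once and reuses it at every step, repeated observations of the same mistake shift posterior mass toward policies that keep making it, which per-step independent Boltzmann noise cannot capture.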