Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation. Furthermore, these measures have been studied under differing assumptions, which precludes the design of a single framework that captures all of them under the same assumptions. In this paper, we present a unifying Bayesian framework that models a human observer's evolving beliefs about an agent, and thereby define the problem of Generalized Human-Aware Planning. We show that the definitions of interpretability measures from the prior literature, such as explicability, legibility, and predictability, fall out as special cases of our general framework. Through this framework, we also bring to light a previously ignored fact: human-robot interactions are, in effect, open-world problems, particularly as a result of modeling the human's beliefs about the agent. This is because the human may not only hold beliefs unknown to the agent but may also form new hypotheses about the agent when presented with novel or unexpected behaviors.
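As a concrete illustration of the belief dynamics such a framework formalizes, the observer's evolving belief over candidate agent models can be written as a standard Bayesian update. The notation below is ours, offered only as a minimal sketch; the paper's own formulation may differ:

\[
  P_t(M \mid o_{1:t}) \;\propto\; P(o_{1:t} \mid M)\, P_0(M)
\]

where $M$ ranges over the models of the agent hypothesized by the human, $o_{1:t}$ is the behavior prefix observed so far, $P_0$ is the human's prior over those models, and $P(o_{1:t} \mid M)$ scores how well a hypothesized model explains the observed behavior. Under this reading, measures such as explicability, legibility, and predictability can be viewed as objectives defined over how this posterior evolves, and the open-world difficulty arises when the observed behavior is not well explained by any model in the human's current hypothesis set.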