We consider the human-aware task planning problem, in which a human-robot team is given a shared task with a known objective. Recent approaches tackle it by modeling the team as independent, rational agents, with the robot planning for both agents' (shared) tasks. However, the robot knows that humans cannot be directed like artificial agents, so it emulates and predicts the human's decisions, actions, and reactions. Building on earlier approaches, we describe a novel approach to solving such problems, which models and uses execution-time observability conventions. Abstractly, this modeling is based on situation assessment, which lets our approach capture the evolution of each agent's beliefs and anticipate the belief divergences that arise in practice. The approach decides if and when belief alignment is needed and achieves it through communication. These changes improve the solver's performance: (a) communication is used effectively, and (b) the solver is robust on more realistic and challenging problems.
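The core loop described above — tracking an estimated human belief alongside the robot's own, detecting task-relevant divergences, and communicating only when alignment is needed — can be sketched minimally. This is an illustrative outline under simplifying assumptions (beliefs as flat fact-to-value maps, full robot observability); all names are hypothetical and not from the paper:

```python
def divergent_facts(robot_belief, estimated_human_belief, relevant_facts):
    """Return the task-relevant facts on which the two beliefs disagree."""
    return {
        f for f in relevant_facts
        if robot_belief.get(f) != estimated_human_belief.get(f)
    }

def align_if_needed(robot_belief, estimated_human_belief, relevant_facts):
    """Communicate only the diverging, task-relevant facts (belief alignment).

    Returns the updated estimate of the human's belief and the set of
    facts that had to be communicated.
    """
    to_communicate = divergent_facts(
        robot_belief, estimated_human_belief, relevant_facts
    )
    # Situation assessment assumption: after communication, the human's
    # estimated belief adopts the robot's value for each communicated fact.
    aligned = dict(estimated_human_belief)
    aligned.update({f: robot_belief[f] for f in to_communicate})
    return aligned, to_communicate

# Example: beliefs diverge on "door_open" but agree on "box_location",
# so only the former needs to be communicated.
robot = {"door_open": True, "box_location": "table", "light_on": False}
human = {"door_open": False, "box_location": "table"}
aligned, msgs = align_if_needed(
    robot, human, relevant_facts={"door_open", "box_location"}
)
```

Note that `light_on` is never communicated even though the human's estimate lacks it: restricting alignment to task-relevant facts is what keeps communication sparse.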