Trust between team members is an essential requirement for any successful cooperation. Thus, engendering and maintaining the trust of fellow team members becomes a central responsibility for any member who wants not only to participate successfully in the task but also to ensure that the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may hold different models of the task at hand and thus different expectations about the current course of action, forcing the robot to engage in costly explicable behavior. We propose a computational model for capturing and modulating trust in such longitudinal human-robot interactions, where the human adopts a supervisory role. In our model, the robot integrates the human's trust and their expectations of the robot into its planning process in order to build and maintain trust over the interaction horizon. By establishing the required level of trust, the robot can focus on maximizing the team goal, eschewing explicit explanatory or explicable behavior, without worrying that the human supervisor will monitor and intervene to stop behaviors they may not understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human-subjects experiment.
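The abstract does not spell out the model's internals, so the following Python sketch is only a rough illustration of the meta-reasoning idea it describes: a meta-level controller that, for each planning task over a longitudinal horizon, chooses between a costly explicable plan and a cheaper optimal plan based on an estimate of the supervisor's trust, and updates that estimate from whether the supervisor intervenes. All function names, thresholds, and trust dynamics here are hypothetical assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: a meta-level trust-aware plan selector.
# The trust dynamics and thresholds below are invented for exposition.

from dataclasses import dataclass
import random


@dataclass
class PlanChoice:
    name: str
    cost: float        # execution cost to the robot
    explicable: bool   # whether the plan matches the human's expectations


def choose_plan(trust: float, threshold: float = 0.6) -> PlanChoice:
    """Meta-level decision: with enough trust, eschew explicable behavior
    and pursue the cheaper plan that optimizes the team goal."""
    if trust >= threshold:
        return PlanChoice("optimal", cost=1.0, explicable=False)
    return PlanChoice("explicable", cost=2.5, explicable=True)


def supervisor_intervenes(trust: float, plan: PlanChoice) -> bool:
    """Hypothetical supervisor: more likely to stop plans they do not
    understand when their trust in the robot is low."""
    if plan.explicable:
        return False
    return random.random() > trust


def update_trust(trust: float, plan: PlanChoice, intervened: bool,
                 gain: float = 0.15, loss: float = 0.3) -> float:
    """Toy trust update: expectation-conforming or uninterrupted behavior
    raises trust; a supervisor intervention lowers it."""
    if intervened:
        return max(0.0, trust - loss)
    return min(1.0, trust + (gain if plan.explicable else gain / 2))


# Longitudinal interaction: trust is built early via explicable plans,
# after which the robot can switch to cheaper optimal plans.
trust = 0.3
for task in range(10):
    plan = choose_plan(trust)
    stopped = supervisor_intervenes(trust, plan)
    trust = update_trust(trust, plan, stopped)
    print(f"task {task}: {plan.name:10s} intervened={stopped} trust={trust:.2f}")
```

Run over the horizon, the controller initially pays the cost of explicable plans to raise trust past the threshold, then shifts to optimal plans; an intervention drops trust and pushes it back toward explicable behavior, which is one simple way to read the build-and-maintain dynamic the abstract describes.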