Trust between team members is an essential requirement for any successful cooperation. Engendering and maintaining the trust of fellow team members thus becomes a central responsibility for any member who aims not only to participate in the task but to ensure that the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations regarding the current course of action, forcing the robot to fall back on costly explicable behavior. We propose a computational model for capturing and modulating trust in such iterated human-robot interaction settings, where the human adopts a supervisory role. In our model, the robot integrates the human's trust and their expectations of the robot into its planning process in order to build and maintain trust over the interaction horizon. Once the required level of trust is established, the robot can focus on maximizing the team goal, eschewing explicit explanatory or explicable behavior, without worrying that the human supervisor will monitor and intervene to stop behaviors they may not fully understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human-subject experiment.