This work proposes a framework that incorporates trust into an ad hoc teamwork scenario with human-agent teams, where an agent must collaborate with a human to perform a task. During the task, the agent must infer, through interactions and observations, how much the human trusts it, and adapt its behaviour to maximize the team's performance. To achieve this, we propose collecting data from human participants in experiments to define distinct settings based on trust levels, and learning an optimal policy for each setting. We then create a module that infers the current setting from the human's level of trust. Finally, we validate this framework in a real-world scenario and analyse how this adaptive behaviour affects trust.