Trust is essential in shaping human interactions with one another and with robots. This paper examines how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study spanning two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust on unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
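To make the idea of trust transfer via latent task representations concrete, the following is a minimal sketch, not the paper's actual model: it assumes trust in a robot on a task can be summarized as a Beta posterior over success probability, and that observations from one task update trust on another task with a weight given by similarity of (hypothetical) latent task embeddings. All class and function names here are illustrative.

```python
import numpy as np

def task_similarity(z_a, z_b):
    """Cosine similarity between two latent task embeddings (assumed given)."""
    return float(np.dot(z_a, z_b) / (np.linalg.norm(z_a) * np.linalg.norm(z_b)))

class BayesTrustModel:
    """Beta-Bernoulli trust over robot success on a target task.

    Outcomes observed on other tasks are discounted by latent-task
    similarity before updating the posterior pseudo-counts.
    """
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-count of successes
        self.beta = beta    # prior pseudo-count of failures

    def update(self, success, weight=1.0):
        """Incorporate one observed outcome, weighted by task similarity."""
        if success:
            self.alpha += weight
        else:
            self.beta += weight

    def trust(self):
        """Posterior mean probability that the robot succeeds on this task."""
        return self.alpha / (self.alpha + self.beta)

# Illustrative use: outcomes on task A shift trust on a related task B.
z_task_a = np.array([1.0, 0.2, 0.0])   # hypothetical embedding of task A
z_task_b = np.array([0.9, 0.3, 0.1])   # hypothetical embedding of task B

model_b = BayesTrustModel()
w = task_similarity(z_task_a, z_task_b)
model_b.update(success=True, weight=w)   # robot succeeded on task A
print(round(model_b.trust(), 3))         # trust on task B rises above the 0.5 prior
```

The key design choice this sketch illustrates is that a success on a dissimilar task (small `w`) barely moves trust, while a success on a near-identical task shifts it almost as much as a direct observation would.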