How do people build up trust with artificial agents? Here, we study a key component of interpersonal trust: people's ability to evaluate the competence of another agent across repeated interactions. Prior work has largely focused on appraisal of simple, static skills; in contrast, we probe competence evaluations in a rich setting with agents that learn over time. Participants played a video game involving physical reasoning, paired with one of four artificial agents that suggested moves each round. We measured participants' decisions to accept or revise their partner's suggestions to understand how people evaluated their partner's ability. Overall, participants collaborated successfully with their agent partners; moreover, when revising their partner's suggestions, people made sophisticated inferences about their partner's competence from its prior behavior. These results provide a quantitative measure of how people integrate a partner's competence into their own decisions and may help facilitate better coordination between humans and artificial agents.