Trust is essential for sustaining cooperation among humans. The same principle applies to interactions with computers and robots: if we do not trust them, we will not accept their help. Extensive evidence has shown that our trust in other agents depends on their performance. However, in uncertain environments, humans may not be able to correctly estimate other agents' performance, potentially leading to distrust or over-trust in peers and machines. In the current study, we investigate whether humans' trust towards peers, computers and robots is biased by prior beliefs in uncertain interactive settings. Participants made perceptual judgments and observed the simulated estimates of either a human participant, a computer or a social robot. Participants could modify their judgments based on this feedback. Results show that participants' beliefs about the nature of the interacting partner biased their compliance with the partner's judgments, even though the partners' judgments were identical. Surprisingly, the social robot was trusted more than the computer and the human partner. Trust in the alleged human partner was not fully predicted by its perceived performance, suggesting the emergence of normative processes in peer interaction. Our findings offer novel insights into the mechanisms underlying trust towards peers and autonomous agents.