As AI systems become increasingly involved in decision making, it is important that they elicit appropriate levels of trust from their users. To achieve this, we must first understand which factors influence trust in AI. We identify a research gap regarding the role of personal values in trust in AI. This paper therefore studies how Value Similarity (VS) between a human and an agent influences the human's trust in that agent. To explore this, 89 participants teamed up with five different agents, designed to have varying levels of value similarity with the participants' own values. In a within-subjects, scenario-based experiment, the agents gave suggestions on what to do when entering a building to save a hostage. We analyzed the agents' scores on subjective value similarity and trust, along with qualitative data from open-ended questions. Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive relationship between the two. With this result, we add to the existing understanding of human-agent trust by providing insight into the role of value similarity.