Trust has emerged as a key factor in people's interactions with AI-infused systems. Yet, little is known about which models of trust have been used and for which systems: robots, virtual characters, smart vehicles, decision aids, or others. Moreover, there is as yet no standard approach to measuring trust in AI. This scoping review maps out the state of affairs on trust in human-AI interaction (HAII) from the perspectives of models, measures, and methods. Findings suggest that trust is an important and multi-faceted topic of study within HAII contexts. However, most work is under-theorized and under-reported, generally not using established trust models and missing details about methods, especially Wizard of Oz protocols. We offer several targets for systematic review work as well as a research agenda for combining the strengths and addressing the weaknesses of the current literature.