User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and proven to be a key element in fostering adoption. It has been suggested that AI-enabled systems must move beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods drawn from 23 empirical studies, gathering insights for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust. Rather than comparing definitions, the focus should be on selecting the definition that most appropriately depicts user trust in a specific context. User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works not only for the user but also for the system.