As AI chatbots become integrated into education, students are turning to these systems for guidance, feedback, and information. However, the anthropomorphic characteristics of these chatbots create ambiguity over whether students trust them as they would a human peer or instructor (human-like trust, typically grounded in interpersonal trust models) or as they would a conventional technology (system-like trust, typically grounded in technology trust models). This ambiguity poses theoretical challenges: interpersonal trust models may inappropriately ascribe human intentionality and morality to AI, while technology trust models were developed for non-social systems, leaving their applicability to conversational, human-like agents unclear. To address this gap, we examine and compare how these two forms of trust, human-like and system-like, influence students' perceptions of an AI chatbot, specifically perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness. Using partial least squares structural equation modeling (PLS-SEM), we found that both forms of trust significantly influenced student perceptions, though to different degrees. Human-like trust was the stronger predictor of trusting intention, whereas system-like trust more strongly influenced behavioral intention and perceived usefulness; both had comparable effects on perceived enjoyment. These results suggest that interactions with AI chatbots give rise to a distinct form of trust, human-AI trust, that differs from both human-human and human-technology trust, highlighting the need for new theoretical frameworks in this domain. The study also offers practical insights for fostering appropriately calibrated trust, which is critical for the effective adoption and pedagogical impact of AI in education.