As artificial intelligence (AI) systems become increasingly embedded in everyday life, the ability of interactive agents to express empathy has become critical for effective human-AI interaction, particularly in emotionally sensitive contexts. Rather than treating empathy as a binary capability, this study examines how different levels of empathic expression in virtual human interaction influence user experience. We conducted a between-subjects experiment (n = 70) in a counseling-style interaction context, comparing three virtual human conditions: a neutral dialogue-based agent, a dialogue-based empathic agent, and a video-based empathic agent that incorporated users' facial cues. Participants engaged in a 15-minute interaction and subsequently evaluated their experience using subjective measures of empathy and interaction quality. Results from an analysis of variance (ANOVA) revealed significant differences across conditions in affective empathy, perceived naturalness of facial movement, and appropriateness of facial expression. The video-based empathic condition elicited significantly higher affective empathy than the neutral baseline (p < .001) and marginally higher affective empathy than the dialogue-based empathic condition (p < .10). In contrast, cognitive empathy did not differ significantly across conditions. These findings indicate that empathic expression in virtual humans should be conceptualized as a graded design variable, rather than a binary capability, with visually grounded cues playing a decisive role in shaping affective user experience.
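To make the reported analysis concrete, the following is a minimal sketch of a one-way between-subjects ANOVA with post-hoc pairwise comparisons over three conditions, of the kind summarized above. It is not the authors' analysis code: the data are simulated placeholders, and the column names ("condition", "affective_empathy") and group sizes are illustrative assumptions.

```python
# Minimal sketch of a one-way between-subjects ANOVA with Tukey HSD post-hoc
# comparisons over three virtual-human conditions. Data are simulated
# placeholders; variable names and group sizes are assumptions for illustration.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n_per_group = 23  # roughly 70 participants split across three conditions

# Simulated affective-empathy ratings for each condition (illustrative only).
df = pd.DataFrame({
    "condition": (["neutral"] * n_per_group
                  + ["dialogue_empathic"] * n_per_group
                  + ["video_empathic"] * n_per_group),
    "affective_empathy": np.concatenate([
        rng.normal(3.0, 0.8, n_per_group),   # neutral baseline
        rng.normal(3.6, 0.8, n_per_group),   # dialogue-based empathic
        rng.normal(4.1, 0.8, n_per_group),   # video-based empathic
    ]),
})

# Omnibus one-way ANOVA across the three conditions.
groups = [g["affective_empathy"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (Tukey HSD) to locate which conditions differ.
tukey = pairwise_tukeyhsd(df["affective_empathy"], df["condition"], alpha=0.05)
print(tukey.summary())
```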