Robots with anthropomorphic features are increasingly shaping how humans perceive and morally engage with them. Our research investigates how different levels of anthropomorphism influence protective responses to robot abuse, extending the Computers as Social Actors (CASA) and uncanny valley theories into the moral domain. In an experiment, we invited 201 participants to view videos depicting abuse toward a robot with low (Spider), moderate (Two-Foot), or high (Humanoid) anthropomorphism. To provide a comprehensive analysis, we triangulated three modalities: self-report surveys measuring emotions and uncanniness, physiological data from automated facial expression analysis, and qualitative reflections. Findings indicate that protective responses are not linear. The moderately anthropomorphic Two-Foot robot, rated highest in eeriness and "spine-tingling" sensations consistent with the uncanny valley, elicited the strongest physiological anger expressions. Self-reported anger and guilt were significantly higher for both the Two-Foot and Humanoid robots than for the Spider. Qualitative findings further reveal that as anthropomorphism increases, moral reasoning shifts from technical assessments of property damage to condemnation of the abuser's character, while governance proposals expand from property law to calls for quasi-animal rights and broader societal responsibility. These results suggest that the uncanny valley does not dampen moral concern but paradoxically heightens protective impulses, offering critical implications for robot design, policy, and future legal frameworks.