Gender/ing guides how we view ourselves, the world around us, and each other--including non-humans. Critical voices have raised the alarm about stereotyped gendering in the design of socially embodied artificial agents like voice assistants, conversational agents, and robots. Yet, little is known about how this plays out in research and to what extent. As a first step, we critically reviewed the case of Pepper, a gender-ambiguous humanoid robot. We conducted a systematic review (n=75) involving meta-synthesis and content analysis, examining how participants and researchers gendered Pepper through stated and unstated signifiers and pronoun usage. We found that ascriptions of Pepper's gender were inconsistent, limited, and at times discordant, with little evidence of conscious gendering and some indication of researcher influence on participant gendering. We offer six challenges driving the state of affairs and a practical framework coupled with a critical checklist for centering gender in research on artificial agents.