As different research works report and daily life experiences confirm, learning models can produce biased outcomes. These biased learned models typically replicate historical discrimination in society and tend to negatively affect less represented identities. Robots are equipped with such models to operate, performing increasingly complex tasks every day. The learning process consists of different stages that depend on human judgments. Moreover, the resulting learned models for robot decisions rely on recorded labeled data or demonstrations. Therefore, the robot learning process is susceptible to bias linked to human behavior in society. This poses a potential danger, especially when robots operate around humans and the learning process can reflect the social unfairness present today. Different feminist proposals study social inequality and provide essential perspectives for removing bias in various fields. Furthermore, feminism has enabled, and continues to enable, the reconfiguration of numerous social dynamics and stereotypes, advocating for equality across people in their diversity. Consequently, in this work we provide a feminist perspective on the robot learning process. We base our discussion on intersectional feminism, community feminism, decolonial feminism, and pedagogy perspectives, and we frame our work within a feminist robotics approach. In this paper, we present an initial discussion that emphasizes the relevance of feminist perspectives to explore, foresee, and eventually correct biased robot decisions.