As robotic systems become increasingly capable of assisting humans in everyday life, we must consider the ways in which these artificial agents can make their human collaborators feel unsafe or treat them unfairly. Robots can exhibit antisocial behavior that causes physical harm to people, or reproduce unfair behavior that replicates and even amplifies historical and societal biases detrimental to the humans they interact with. In this paper, we discuss these issues in the context of sociable robotic manipulation and fair robotic decision making. We propose a novel approach to learning fair and sociable behavior, not by reproducing positive behavior, but rather by avoiding negative behavior. We highlight the importance of incorporating sociability in robot manipulation, as well as the need to consider fairness in human-robot interactions.