Robots operating in close proximity to humans rely heavily on human trust to successfully complete their tasks. But what are the real outcomes when this trust is violated? Self-defense law provides a framework for analyzing tangible failure scenarios that can inform the design of robots and their algorithms. Studying self-defense is particularly important for ground robots, since they operate within public human environments where they can pose a legitimate threat to human safety. Moreover, even if ground robots can guarantee human safety, the mere perception of a threat is enough to warrant human self-defense against robots. In this paper, we synthesize works in law, engineering, and the social sciences to present four actionable recommendations for how the robotics community can craft robots that mitigate the likelihood of self-defense situations arising. We establish how current U.S. self-defense law can justify a human protecting themselves against a robot, discuss the current literature on human attitudes toward robots, and analyze existing methods that allow robots to operate close to humans. Finally, we present hypothetical scenarios that underscore how current robot navigation methods fail to sufficiently consider self-defense concerns, motivating the need for our recommendations to guide improvements in the field.