This paper revisits the debate over the legal personhood of AI and robots, a question that has been contentious yet increasingly important amid the broad adoption of autonomous and self-learning systems. We conducted a survey ($N$=3,315) to understand lay people's perceptions of this topic and analyzed how they assign responsibility, awareness, and punishment to AI, robots, humans, and other entities that could be held liable under existing doctrines. Although participants did not attribute any mental state to automated agents, they still assigned punishment and responsibility to these entities. While participants mostly agreed that AI systems could be reformed through punishment, they did not believe such punishment would serve its retributive and deterrent functions. Participants were also unwilling to grant automated agents essential preconditions of punishment, namely physical independence or assets. We term this contradiction the punishment gap. We observe the same punishment gap in a demographically representative sample of U.S. residents ($N$=244). We discuss the implications of these findings for how legal and social decisions could shape public attribution of responsibility and punishment to automated agents.