Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which has not yet been examined through the lens of the general public. We present two studies (N = 3,559) to obtain people's views of electronic legal personhood vis-à-vis existing liability models. Our study reveals people's desire to punish automated agents even though these entities are not recognized as having any mental state. Furthermore, people did not believe automated agents' punishment would fulfill deterrence or retribution, and they were unwilling to grant these agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.