As human science pushes towards the development of artificial intelligence (AI), the sweep of progress has led scholars and policymakers alike to question the legality of applying AI in various human endeavours. For example, debate has raged in international scholarship over the legitimacy of applying AI to weapon systems to form lethal autonomous weapon systems (LAWS). Yet the same questions arise even when AI is applied to a military autonomous system that is not weaponised: how does one hold a machine accountable for a crime? What about a tort? Can an artificial agent understand the moral and ethical content of its instructions? These are thorny questions, and in many cases they have been answered in the negative, as artificial entities lack any contingent moral agency. What, then, if the AI is not alone, but is linked with or overseen by a human being who carries their own moral and ethical understandings and obligations? Who is responsible for any malfeasance that may be committed? Does the human bear the legal risks of unethical or immoral decisions made by an AI? These are some of the questions this manuscript seeks to engage with.