The paper argues that the material scope of AI regulations should not rely on the term "artificial intelligence (AI)". The argument is developed by proposing a number of requirements for legal definitions, surveying existing AI definitions, and then discussing the extent to which they meet the proposed requirements. It is shown that existing definitions of AI do not meet the most important requirements for legal definitions. Next, the paper argues that a risk-based approach would be preferable. Rather than using the term AI, policy makers should focus on the specific risks they want to reduce. It is shown that the requirements for legal definitions can be better met by defining the main sources of relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.