The proposed European Artificial Intelligence Act (AIA) is the first attempt by any major global economy to elaborate a general legal framework for AI. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit in other words. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from the existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research on how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.