The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while taking steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure-of-evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness, and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).