Emerging technologies powered by Artificial Intelligence (AI) have the potential to transform our societies for the better. In particular, data-driven learning approaches, i.e., Machine Learning (ML), have revolutionized the advancement of multiple technologies across various application domains. At the same time, there is growing concern that certain intrinsic characteristics of these methodologies carry potential risks to both safety and fundamental rights. Although mechanisms exist in the adoption process to minimize these risks (e.g., safety regulations), they do not exclude the possibility of harm, and when harm occurs, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims who use or interact with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, and their self- and continuous-learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, together with the methodology used to develop them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.