Emerging technologies powered by Artificial Intelligence (AI) have the potential to transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies across various application domains. At the same time, however, there are growing concerns about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although the adoption process includes mechanisms to minimize these risks (e.g., safety regulations), they do not exclude the possibility of harm occurring, and if harm does occur, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability, or their self- and continuous-learning capabilities, lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, as well as the methodology used to develop them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones, and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.