The European Union has proposed the Artificial Intelligence Act, which is intended to regulate AI systems, especially those used in high-risk, safety-critical applications such as healthcare. Among the Act's articles are detailed requirements for transparency and explainability. The field of explainable AI (XAI) offers technologies that could address many of these requirements. However, there are significant differences between the solutions offered by XAI and the requirements of the AI Act, for instance the Act's lack of an explicit definition of transparency. We argue that collaboration between lawyers and XAI researchers is essential to resolve these differences. To establish common ground, we give an overview of XAI and its legal relevance, followed by a reading of the transparency and explainability requirements of the AI Act and the related General Data Protection Regulation (GDPR). We then discuss four main topics where these differences could cause problems: the legal status of XAI, the lack of a definition of transparency, issues around conformity assessments, and the use of XAI for dataset-related transparency. We hope that this increased clarity will promote interdisciplinary research between law and XAI and support the creation of sustainable regulation that fosters responsible innovation.