This study discusses the interplay between metrics used to measure the explainability of AI systems and the proposed EU Artificial Intelligence Act. A standardisation process is ongoing: several entities (e.g. ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role in this effort. This study identifies the requirements that such a metric should possess to ease compliance with the AI Act. It does so through an interdisciplinary approach, i.e. by starting from the philosophical concept of explainability and examining several metrics proposed by scholars and standardisation entities through the lens of the explainability obligations set by the proposed AI Act. Our analysis suggests that metrics measuring the kind of explainability endorsed by the proposed AI Act should be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. We therefore discuss the extent to which these requirements are met by the metrics currently under discussion.