Through the General Data Protection Regulation (GDPR), the European Union has set out its vision for Automated Decision-Making (ADM) and AI, which must be reliable and human-centred. In particular, we are interested in the Right to Explanation, which requires industry to produce explanations of ADM. The High-Level Expert Group on Artificial Intelligence (AI-HLEG), set up to support the implementation of this vision, has produced guidelines discussing the types of explanations that are appropriate for user-centred (interactive) Explanatory Tools. In this paper we propose our version of Explanatory Narratives (EN), based on user-centred concepts drawn from ISO 9241, as a model for user-centred explanations aligned with the GDPR and the AI-HLEG guidelines. Through the use of ENs we convert the problem of generating explanations for ADM into the identification of an appropriate path over an Explanatory Space, allowing explainees to interactively explore it and produce the explanation best suited to their needs. To this end we list suitable exploration heuristics, study the properties and structure of explanations, and discuss the proposed model, identifying its weaknesses and strengths.