Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated in recent years. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of sophisticated machine learning techniques, explainability has become increasingly critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that account for trustworthiness, comprehensibility, explicit provenance, and context awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields, and we use these past efforts to generate a set of explanation types that we believe reflect the expanded needs of explanation for today's Artificial Intelligence applications. We define each type and provide an example question that would motivate the need for that style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements and, further, help them produce explanations that are better aligned with users' and situational needs.