The lack of explainability in decisions made by Artificial Intelligence (AI) based "black box" systems/models, despite their superior performance in many real-world applications, is a key stumbling block to adopting AI in high-stakes applications across domains and industries. While many popular Explainable Artificial Intelligence (XAI) methods are available to produce human-friendly explanations of a decision, each has its own merits and demerits, along with a plethora of open challenges. We demonstrate popular XAI methods on a common case study/task (i.e., credit default prediction), analyze their competitive advantages from multiple perspectives (e.g., local and global explanations), provide meaningful insight into quantifying explainability, and recommend paths towards responsible or human-centered AI using XAI as a medium. Practitioners can use this work as a catalog to understand, compare, and correlate the competitive advantages of popular XAI methods. In addition, this survey elicits future research directions towards responsible or human-centric AI systems, which are crucial for adopting AI in high-stakes applications.