The past decade has seen significant progress in artificial intelligence (AI), leading to algorithms being adopted to solve a wide variety of problems. However, this success has come at the cost of increasing model complexity and reliance on black-box AI models that lack transparency. In response, Explainable AI (XAI) has been proposed to make AI more transparent and thus advance its adoption in critical domains. Although several reviews of XAI topics in the literature have identified challenges and potential research directions in XAI, these challenges and research directions remain scattered. This study therefore presents a systematic meta-survey of challenges and future research directions in XAI, organized around two themes: (1) general challenges and research directions in XAI, and (2) challenges and research directions in XAI based on the phases of the machine learning life cycle: design, development, and deployment. We believe that our meta-survey contributes to the XAI literature by providing a guide for future exploration in the XAI area.