Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that nevertheless lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested. This systematic review contributes to the body of knowledge by clustering these methods with a hierarchical classification system built around four main clusters: review articles, theories and notions, methods, and their evaluation. It also summarises the state of the art in XAI and recommends future research directions.