In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable an interpretation of complex non-linear learning models such as deep neural networks. Gaining a better understanding of such models is especially important, e.g., for safety-critical ML applications and medical diagnostics. While such Explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and classification tasks, establish novel theoretical insights and analyses for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.