As Machine Learning (ML) is now widely applied in many domains, in both research and industry, understanding what happens inside these black boxes is an increasingly pressing demand, especially from non-experts of these models. Several approaches have thus been developed to provide clear insights into a model's prediction for a particular observation, but at the cost of long computation times or restrictive hypotheses that do not fully account for interactions between attributes. This paper provides methods based on the detection of relevant groups of attributes -- named coalitions -- influencing a prediction, and compares them with the literature. Our results show that these coalitional methods are more efficient than existing ones such as SHapley Additive exPlanations (SHAP): computation time is shortened while an acceptable accuracy of individual prediction explanations is preserved. This enables wider practical use of explanation methods, increasing trust between developed ML models, end-users, and anyone impacted by a decision in which these models played a role.
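To make the point of comparison concrete, the following minimal sketch illustrates the kind of per-prediction attribution that SHAP produces and that the paper's coalitional methods aim to compute more efficiently. It uses the open-source `shap` package; the model and dataset choices are purely illustrative and are not taken from the paper.

```python
# Minimal sketch of instance-level explanation with SHAP (illustrative only;
# the coalitional methods discussed in the paper are a faster alternative).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple model on a standard dataset (illustrative choice).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: one contribution per attribute,
# quantifying how each attribute pushed this single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain one observation
print(shap_values)
```

Exact SHAP values require evaluating the model over all subsets of attributes, which is what makes such explanations expensive and motivates approximations based on relevant attribute coalitions.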