The integration of automated, Machine Learning-based systems into a wide range of tasks has expanded as a result of their performance and speed. Although employing ML-based systems offers numerous advantages, such systems should not be used in critical, high-risk applications where human lives are at stake unless they are interpretable. To address this issue, researchers and businesses have focused on ways to improve the interpretability of complex ML systems, and several such methods have been developed. Indeed, so many techniques now exist that it is difficult for practitioners to choose the best one for their application, even with the help of evaluation metrics. As a result, there is a clear demand for a selection tool: a meta-explanation technique based on a high-quality evaluation metric. In this paper, we present a local meta-explanation technique that builds on the truthfulness metric, a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the relevant concepts and through experimentation.