In the field of eXplainable AI (XAI), robust "black-box" algorithms such as Convolutional Neural Networks (CNNs) are known for achieving high prediction performance. However, explaining and interpreting these algorithms remains a challenge: it requires a deeper understanding of the influential and, more importantly, explainable features that directly or indirectly drive prediction performance. A number of existing methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still lack rigorous definitions. In view of these needs, this paper proposes an interaction-based methodology, the Influence Score (I-score), to screen out noisy and non-informative variables in images, thereby fostering an environment of explainable and interpretable features that are directly associated with feature predictivity. We apply the proposed method to a real-world application, the Pneumonia Chest X-ray Image data set, and produce state-of-the-art results. We also demonstrate how to apply the proposed approach to more general big-data problems, improving explainability and interpretability without sacrificing prediction performance. The contribution of this paper opens a novel angle that moves the community closer to future pipelines for XAI problems.
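The abstract does not reproduce the I-score definition; a minimal sketch, assuming the standard partition-based form of the I-score from the variable-selection literature (Chernoff, Lo, and Zheng), is shown below. The function name `influence_score` and the normalization by the sample variance are illustrative assumptions, not necessarily the exact form used in this paper.

```python
import numpy as np

def influence_score(X, y):
    """Sketch of a partition-based Influence Score (I-score).

    X: (n, k) array of discretized values (e.g., binarized pixels)
       for a candidate subset of k variables.
    y: (n,) array of responses (e.g., 0/1 disease labels).
    Returns a score that tends to grow with the subset's joint
    predictivity and stays small for noisy, non-informative subsets.
    """
    n = len(y)
    y_bar = y.mean()
    # Each distinct row of X defines one cell of the partition.
    _, cell_ids = np.unique(X, axis=0, return_inverse=True)
    score = 0.0
    for j in np.unique(cell_ids):
        in_cell = cell_ids == j
        n_j = in_cell.sum()
        # Cells whose local mean response deviates from the global
        # mean contribute quadratically, weighted by cell size.
        score += n_j**2 * (y[in_cell].mean() - y_bar) ** 2
    # Normalization by n * Var(y) is one common convention (assumed).
    var_y = y.var()
    return score / (n * var_y) if var_y > 0 else 0.0
```

In a screening pipeline of the kind the abstract describes, candidate feature subsets would be ranked by this score and low-scoring (non-informative) variables dropped before training the downstream classifier.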