In the field of eXplainable AI (XAI), robust ``blackbox'' algorithms such as Convolutional Neural Networks (CNNs) are known for achieving high prediction performance. However, the ability to explain and interpret these algorithms still requires innovation in the understanding of influential and, more importantly, explainable features that directly or indirectly impact prediction performance. A number of existing methods in the literature focus on visualization techniques, but the concepts of explainability and interpretability still lack rigorous definitions. In view of these needs, this paper proposes an interaction-based methodology -- the Influence Score (I-score) -- to screen out noisy and non-informative variables in images, thereby fostering an environment of explainable and interpretable features that are directly associated with feature predictivity. We apply the proposed method to a real-world application, the Pneumonia Chest X-ray Image data set, and produce state-of-the-art results. We demonstrate how to apply the proposed approach to more general big data problems, improving explainability and interpretability without sacrificing prediction performance. The contribution of this paper opens a novel angle that moves the community closer to future pipelines for XAI problems.
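The abstract does not spell out how the I-score is computed; purely as an illustrative sketch, the snippet below assumes the unnormalized form commonly used in the interaction-based screening literature, I(Pi) = sum_j n_j^2 (Ybar_j - Ybar)^2, taken over the partition cells induced by the joint levels of a candidate set of discretized variables. The function name compute_i_score and the toy data are our own hypothetical choices, not taken from the paper.

```python
import numpy as np

def compute_i_score(X_subset, y):
    """Illustrative sketch of an Influence Score (I-score) computation.

    X_subset : (n, k) array of discretized explanatory variables
               (e.g., binarized pixel features); each row is a sample.
    y        : (n,) array of responses (e.g., 0/1 class labels).

    Samples are partitioned into cells by the joint levels of the k
    variables; the score sums n_j^2 * (ybar_j - ybar)^2 over the cells.
    (Assumed unnormalized form; the paper's exact variant may differ.)
    """
    y = np.asarray(y, dtype=float)
    y_bar = y.mean()

    # One cell per unique joint level (unique row of X_subset).
    _, cell_ids = np.unique(X_subset, axis=0, return_inverse=True)

    score = 0.0
    for j in np.unique(cell_ids):
        in_cell = cell_ids == j
        n_j = in_cell.sum()
        score += (n_j ** 2) * (y[in_cell].mean() - y_bar) ** 2
    return score

# Toy usage: compare an informative variable subset against a noisy one.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5))      # 5 binarized features
y = (X[:, 0] ^ X[:, 1]).astype(float)      # response driven by features 0 and 1
print(compute_i_score(X[:, [0, 1]], y))    # informative pair -> large score
print(compute_i_score(X[:, [3, 4]], y))    # noisy pair -> small score
```

In a screening setting of the kind the abstract describes, subsets of variables with high scores would be retained as informative features while low-scoring, noisy subsets would be discarded.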