Researchers have proposed various methods for visually interpreting Convolutional Neural Networks (CNNs) via saliency maps, among which Class-Activation-Map (CAM) based approaches form a leading family. However, in terms of internal design logic, existing CAM-based approaches often overlook the causal perspective that answers the core "why" question and helps humans understand the explanation. Additionally, current CNN explanations lack consideration of both necessity and sufficiency, two complementary sides of a desirable explanation. This paper presents a causality-driven framework, SUNY, designed to rationalize explanations for better human understanding. Taking the CNN model's input features or internal filters as hypothetical causes, SUNY generates explanations through bi-directional quantification along both the necessary and the sufficient perspectives. Extensive evaluations show that SUNY not only produces more informative and convincing explanations from the angles of necessity and sufficiency, but also achieves performance competitive with other approaches across different CNN architectures on large-scale datasets, including ILSVRC2012 and CUB-200-2011.