Explaining decisions made by deep neural networks is a rapidly advancing research topic. In recent years, several approaches have attempted to provide visual explanations of decisions made by neural networks designed for structured 2D image input data. In this paper, we propose a novel approach to generate coarse visual explanations of networks designed to classify unstructured 3D data, namely point clouds. Our method uses gradients flowing back to the final feature map layers and maps these values as contributions of the corresponding points in the input point cloud. Because the input points and final feature maps differ in dimensionality and lack spatial correspondence, our approach combines gradients with iterative point dropping to compute explanations for different parts of the point cloud. The generality of our approach is tested on various point cloud classification networks, including the 'single object' networks PointNet, PointNet++, and DGCNN, and the 'scene' network VoteNet. Our method generates symmetric explanation maps that highlight important regions and provide insight into the decision-making process of network architectures. We perform an exhaustive evaluation of the trust and interpretability of our explanation method against comparative approaches using quantitative, qualitative, and human studies. All our code is implemented in PyTorch and will be made publicly available.
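The core idea of combining per-point gradient magnitudes with iterative point dropping can be illustrated with a toy sketch. The snippet below is a hedged, simplified illustration, not the paper's exact algorithm: it uses a hand-built differentiable scoring function (`toy_score`, an assumption standing in for a trained network's class logit) with an analytic gradient, assigns each point an importance from its gradient magnitude, and repeatedly drops the most influential points so that later rounds attribute importance to the remaining regions of the cloud.

```python
import numpy as np

def toy_score(points, w):
    # Toy stand-in for a class logit: sum of squared projections onto w.
    return np.sum((points @ w) ** 2)

def point_gradients(points, w):
    # Analytic gradient of toy_score w.r.t. each point: d/dp (p.w)^2 = 2(p.w)w.
    return 2.0 * (points @ w)[:, None] * w[None, :]

def iterative_saliency(points, w, n_rounds=3, drop_frac=0.2):
    """Sketch of gradient-based importance with iterative point dropping.

    Each round, surviving points receive an importance proportional to
    their gradient magnitude, weighted so points dropped in earlier
    rounds rank highest; the top drop_frac of points are then removed
    before the next round. This is an illustrative toy, not the paper's
    exact explanation method.
    """
    n = len(points)
    alive = np.ones(n, dtype=bool)
    importance = np.zeros(n)
    for r in range(n_rounds):
        idx = np.where(alive)[0]
        if len(idx) == 0:
            break
        grads = point_gradients(points[idx], w)
        mag = np.linalg.norm(grads, axis=1)
        # Earlier rounds get a larger weight so first-dropped points rank highest.
        importance[idx] = np.maximum(importance[idx], mag * (n_rounds - r))
        k = max(1, int(drop_frac * len(idx)))
        drop = idx[np.argsort(mag)[-k:]]  # drop the k highest-gradient points
        alive[drop] = False
    return importance

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))          # a random toy "point cloud"
w = np.array([1.0, 0.0, 0.0])            # score depends only on the x-axis
imp = iterative_saliency(pts, w)
```

With this toy score, a point's gradient magnitude grows with its projection onto `w`, so points far out along the x-axis receive the highest importance, mimicking how gradient-based maps highlight the regions the score depends on most.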