Interpreting the decision logic behind effective deep convolutional neural networks (CNNs) on images complements the success of deep learning models. However, existing methods can only interpret specific decision logic on an individual image or a small number of images. To facilitate human understanding and generalization, it is important to develop representative interpretations that capture the common decision logic of a CNN on a large group of similar images, revealing the common semantics that contribute to many closely related predictions. In this paper, we develop a novel unsupervised approach to produce a highly representative interpretation for a large number of similar images. We formulate the problem of finding representative interpretations as a co-clustering problem, and convert it into a submodular-cost submodular-cover problem based on a sample of the linear decision boundaries of a CNN. We also present a visualization and similarity ranking method. Our extensive experiments demonstrate the excellent performance of our method.
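To make the boundary-sampling and submodular-cover steps concrete, below is a minimal toy sketch in Python/NumPy. It is not the paper's method: the random hyperplanes (`W`, `b`) merely stand in for linear decision boundaries sampled from the piecewise-linear surface of a ReLU CNN, the features `X`, labels `y`, and the `gain` heuristic are all hypothetical, and the greedy loop is only a simplified stand-in for a submodular-cost submodular-cover solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper's setting, each boundary would be
# a linear piece h(x) = w @ x + b sampled from a ReLU CNN's decision surface.
D, H, N = 16, 40, 200                    # feature dim, #boundaries, #images
W = rng.normal(size=(H, D))              # boundary normals (random stand-ins)
b = rng.normal(size=H)                   # boundary offsets
X = rng.normal(size=(N, D))              # image features
y = rng.integers(0, 2, size=N)           # predicted labels
x_ref, c_ref = X[0], y[0]                # reference image and its prediction

side_ref = np.sign(W @ x_ref + b)               # side of each boundary for x_ref
same_side = np.sign(X @ W.T + b) == side_ref    # (N, H): agrees with x_ref?

def covered(P):
    """Images inside the region carved out by the boundaries in P."""
    if not P:
        return np.ones(N, dtype=bool)
    return same_side[:, sorted(P)].all(axis=1)

# Toy greedy stand-in for the submodular-cost submodular-cover step:
# add boundaries until no differently predicted image remains inside,
# preferring boundaries that cut off many "bad" images and few "good" ones.
P = set()
while (covered(P) & (y != c_ref)).any() and len(P) < H:
    inside = covered(P)

    def gain(h):
        keep = inside & same_side[:, h]
        bad_cut = (inside & (y != c_ref)).sum() - (keep & (y != c_ref)).sum()
        good_cut = (inside & (y == c_ref)).sum() - (keep & (y == c_ref)).sum()
        return bad_cut - good_cut

    P.add(max(set(range(H)) - P, key=gain))

print(f"{len(P)} boundaries selected; "
      f"{(covered(P) & (y == c_ref)).sum()} same-prediction images covered")
```

In the intended reading, the selected boundaries carve out a region around the reference image that contains many images with the same prediction and excludes differently predicted ones, which is the flavor of the representative-interpretation objective.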