We propose a novel method that trains a conditional Generative Adversarial Network (GAN) to generate visual interpretations of a Convolutional Neural Network (CNN). To comprehend a CNN, the GAN is trained with information on how the CNN processes an image when making predictions. Supplying this information poses two main challenges: how to represent it in a form that can be fed to the GAN, and how to feed that representation effectively. To address these challenges, we developed a suitable representation of CNN architectures by cumulatively averaging intermediate interpretation maps, and we propose two alternative approaches for feeding the representation to the GAN along with an effective training strategy. Our approach learned general aspects of CNNs and was agnostic to datasets and CNN architectures. The study includes both qualitative and quantitative evaluations and compares the proposed GANs with state-of-the-art approaches. By interpreting the trained GAN, we found that the initial and final layers of a CNN are equally crucial for interpreting it. We believe training a GAN to interpret CNNs would open the door to improved interpretations that leverage fast-paced advances in deep learning. The code used for experimentation is publicly available at https://github.com/Akash-guna/Explain-CNN-With-GANS
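The abstract does not give implementation details for the cumulative averaging of intermediate interpretation maps, so the following is only a minimal sketch of one plausible reading: per-layer interpretation maps (already resized to a common resolution and normalized) are averaged in a running fashion from the first to the last layer. The function name `cumulative_average_maps` and the preprocessing assumptions are hypothetical, not taken from the paper.

```python
import torch

def cumulative_average_maps(layer_maps):
    """Cumulatively average per-layer interpretation maps.

    layer_maps: list of (H, W) tensors, ordered from the first to the
    last convolutional layer. Each map is assumed to already be resized
    to a common spatial resolution and normalized to [0, 1].
    Returns a list where entry k is the running mean of maps 0..k,
    i.e. one candidate representation of how the CNN builds up its
    prediction layer by layer.
    """
    cumulative = []
    running_sum = torch.zeros_like(layer_maps[0])
    for k, m in enumerate(layer_maps, start=1):
        running_sum = running_sum + m
        cumulative.append(running_sum / k)
    return cumulative
```

Under this reading, the resulting sequence of averaged maps (or a stack of them) would serve as the conditioning input supplied to the GAN during training.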