Deep neural networks excel at finding hierarchical representations that solve complex tasks over large data sets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
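To make the unit-level analysis described above concrete, the following is a minimal sketch of the core matching step: a single unit's upsampled activation map is thresholded and compared against a segmentation mask for a candidate concept using intersection-over-union (IoU). All function and variable names are illustrative assumptions, not the paper's reference implementation, and the data here is random placeholder input.

```python
import numpy as np

def unit_concept_iou(activation_maps, concept_masks, threshold):
    """Score how well one unit matches one concept across a set of images.

    activation_maps: float array [n_images, H, W], the unit's upsampled activations.
    concept_masks:   bool array  [n_images, H, W], pixels labeled with the concept.
    threshold:       scalar; activations above it count as the unit "firing".
    """
    fired = activation_maps > threshold
    intersection = np.logical_and(fired, concept_masks).sum()
    union = np.logical_or(fired, concept_masks).sum()
    return intersection / union if union > 0 else 0.0

# Toy usage: for one unit, pick the candidate concept with the highest IoU.
rng = np.random.default_rng(0)
acts = rng.random((8, 112, 112))                  # stand-in for one unit's activation maps
masks = {"tree": rng.random((8, 112, 112)) > 0.8,  # stand-in segmentation masks
         "sofa": rng.random((8, 112, 112)) > 0.8}
thr = np.quantile(acts, 0.99)                      # e.g., a high activation quantile
best = max(masks, key=lambda c: unit_concept_iou(acts, masks[c], thr))
print("best-matching concept:", best)
```

In this sketch the threshold is taken as a high quantile of the unit's own activations; the same scoring loop would be repeated over every unit and every labeled concept to assign each unit its best-matching semantic label.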