Interpreting how deep neural networks (DNNs) make predictions is a vital problem in artificial intelligence; the lack of interpretability hinders the wide application of DNNs. Visualizing learned representations helps humans understand what DNNs "see." In this work, we generate visualization images that activate the network toward target classes via back-propagation. Rotation and scaling operations are applied during image generation to introduce transformation invariance, which we find significantly improves visualization quality. Finally, we present cases in which this method helps us gain insight into neural networks.
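The procedure described above (gradient-ascent image generation with transformation jitter) can be sketched in a minimal, self-contained toy example. Here a fixed linear scoring function stands in for a trained DNN's class logit, and a random circular shift stands in for the rotation/scaling operations; both substitutions are assumptions for illustration only, and a real implementation would back-propagate through an actual network and apply true geometric transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a class-score function: a fixed linear "neuron".
# (Assumption: a real DNN logit would replace this; the jitter idea is the same.)
W = rng.standard_normal((8, 8))

def class_score(img):
    """Scalar 'logit' for the target class."""
    return float(np.sum(W * img))

def grad_wrt_input(img):
    """Gradient of the linear score w.r.t. the input image."""
    return W

img = np.zeros((8, 8))
for step in range(100):
    # Apply a random transformation before each gradient step: a circular
    # shift here stands in for the rotation/scaling jitter in the paper.
    shift = rng.integers(-1, 2, size=2)
    img = np.roll(img, shift, axis=(0, 1))
    # Gradient ascent on the class score.
    img += 0.1 * grad_wrt_input(img)
```

Because the gradient step is taken on a randomly transformed image, the accumulated result must score well under many transformations at once, which is the intuition behind the improved visualizations.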