When explaining the decisions of deep neural networks, simple stories are tempting but dangerous. Especially in computer vision, the most popular explanation approaches give their users a false sense of comprehension and paint an overly simplistic picture. We introduce an interactive framework for understanding the highly complex decision boundaries of modern vision models. It allows the user to exhaustively inspect, probe, and test a network's decisions. Across a range of case studies, we compare the power of our interactive approach to that of static explanation methods, showing how the latter can lead users astray, with potentially severe consequences.