Deep neural networks, especially convolutional deep neural networks, are state-of-the-art methods for classifying, segmenting, or even generating images, movies, or sounds. However, these methods lack a good semantic understanding of what happens internally. The question of why a COVID-19 detector has classified a stack of lung CT images as positive is sometimes more interesting than the overall specificity and sensitivity, especially when human domain expert knowledge disagrees with the given output. In this way, human domain experts can also be prompted to reconsider their decision in light of the information highlighted by the system. In addition, the deep learning model can be checked, and an existing dataset bias can be uncovered. Currently, most explainable AI methods in the computer vision domain are applied purely to image classification, where the images are ordinary images in the visible spectrum. As a result, there is no comparison of how these methods behave with multimodal image data, and most methods have not been investigated with respect to how they behave when used for object detection. This work tries to close these gaps. First, we investigate how the maps of three saliency map generator methods differ across the different spectra; this is achieved via careful and systematic training. Second, we examine how these methods behave when used for object detection. As a practical problem, we chose object detection in the infrared and visual spectrum for autonomous driving. The dataset used in this work is the Multispectral Object Detection Dataset, in which each scene is available in the FIR, MIR, and NIR as well as the visual spectrum. The results show that there are differences between the infrared and visual activation maps. Further, an advanced training with both the infrared and the visual data not only improves the network's output, it also leads to more focused spots in the saliency maps.
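The abstract does not name the three saliency map generator methods, so as a rough illustration of what such a method computes, the following is a minimal Grad-CAM-style sketch in PyTorch. This is an assumption for illustration only, not necessarily one of the methods studied here; the ResNet-18 backbone, the choice of `layer4` as the target layer, and the random input tensor are all placeholders standing in for a trained detector and a real RGB or infrared image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Placeholder model; in this paper's setting it would be a detector
# trained on visual and/or infrared data.
model = models.resnet18(weights=None).eval()
target_layer = model.layer4  # last convolutional stage (an assumption)

store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output
    # Capture the gradient flowing back through this activation.
    output.register_hook(lambda grad: store.update(grad=grad))

handle = target_layer.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for an RGB or IR image
scores = model(x)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()         # gradient of the top class score

# Grad-CAM: weight each activation channel by its spatially
# averaged gradient, combine, and keep only positive evidence.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

handle.remove()
print(cam.shape)  # torch.Size([1, 1, 224, 224]): saliency map over the input
```

For object detection, the scalar being differentiated would be a detection score (e.g., the confidence of one predicted box) rather than a class logit; the rest of the computation is unchanged.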