One major drawback of deep neural networks (DNNs) for use in sensitive application domains is their black-box nature. This makes it hard to verify or monitor complex, symbolic requirements. In this work, we present a simple, yet effective, approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background knowledge. The knowledge may consist of any fuzzy predicate logic rules. For this, we utilize methods from explainable artificial intelligence (XAI): First, using concept embedding analysis, the output of a computer vision CNN is post-hoc enriched by concept outputs; second, logical rules from prior knowledge are fuzzified to serve as continuous-valued functions on the concept outputs. These can be evaluated with little computational overhead. We demonstrate three diverse use-cases of our method on state-of-the-art object detectors: finding corner cases, utilizing the rules for detecting and localizing DNN misbehavior during runtime, and comparing the logical consistency of DNNs. The latter is used to find related differences between EfficientDet D1 and Mask R-CNN object detectors. We show that the approach benefits from fuzziness and from calibrating the concept outputs.
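For illustration only, the following minimal sketch shows how a fuzzified rule of the kind described above could be evaluated on post-hoc concept outputs. The concept names, scores, threshold, and the particular choice of t-norm and fuzzy implication are assumptions made for this example, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): evaluate a fuzzified
# predicate-logic rule as a continuous-valued function on calibrated
# concept outputs attached post-hoc to a CNN detection.

def t_norm(a, b):      # fuzzy AND (product t-norm, one common choice)
    return a * b

def s_norm(a, b):      # fuzzy OR (probabilistic sum)
    return a + b - a * b

def implies(a, b):     # fuzzy implication (Reichenbach: 1 - a + a*b)
    return 1.0 - a + a * b

# Hypothetical calibrated concept outputs for one detected object,
# e.g. obtained from concept probes on intermediate feature maps.
concepts = {"person": 0.93, "head": 0.21, "arm": 0.15}

# Fuzzified rule: person(x) -> (head(x) OR arm(x)).
truth = implies(concepts["person"], s_norm(concepts["head"], concepts["arm"]))
print(f"rule truth value: {truth:.3f}")

# A low truth value can be flagged at runtime as a potential
# inconsistency between the detection and the background knowledge.
if truth < 0.8:        # hypothetical threshold
    print("possible DNN misbehavior / corner case")
```

Because the rule evaluation reduces to a few arithmetic operations on already computed concept scores, it adds only little overhead per detection, which is what enables the runtime monitoring use-case.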