Despite the great recent advances achieved by deep neural networks (DNNs), they are often vulnerable to adversarial attacks. Intensive research efforts have been made to improve the robustness of DNNs; however, most empirical defenses can again be broken by adaptive attacks, and theoretically certified robustness remains limited, especially on large-scale datasets. One potential root cause of such vulnerabilities is that, although DNNs have demonstrated powerful expressiveness, they lack the reasoning ability to make robust and reliable predictions. In this paper, we aim to integrate domain knowledge to enable robust learning with the reasoning paradigm. In particular, we propose a certifiably robust learning-with-reasoning pipeline (CARE), which consists of a learning component and a reasoning component. Concretely, we use a set of standard DNNs as the learning component to make semantic predictions, and we leverage probabilistic graphical models, such as Markov logic networks (MLNs), as the reasoning component to enable knowledge/logic reasoning. However, exact inference in MLNs (reasoning) is known to be #P-complete, which limits the scalability of the pipeline. To this end, we propose to approximate MLN inference via variational inference based on an efficient expectation-maximization (EM) algorithm. In particular, we leverage graph convolutional networks (GCNs) to encode the posterior distribution during variational inference, and we iteratively update the parameters of the GCN (E-step) and the weights of the knowledge rules in the MLN (M-step). We conduct extensive experiments on different datasets and show that CARE achieves significantly higher certified robustness than state-of-the-art baselines. We additionally conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of integrating different types of knowledge.
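To make the variational-EM procedure concrete, the following is a minimal, hypothetical sketch of the alternating updates described above: a small GCN encodes a mean-field posterior q(z) over grounded predicates (E-step), and the weight of a single toy MLN rule Main(x) => Attr(x) is updated with q fixed (M-step). The class and function names (`SimpleGCN`, `rule_potential`), the toy evidence, the relaxed rule potential, and the per-grounding partition approximation in the M-step are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a CARE-style variational-EM loop (toy single-rule MLN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer GCN that encodes the mean-field posterior q(z_i = 1)
    over grounded predicate truth values (E-step network)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, 1)

    def forward(self, x, adj):
        # adj: normalized adjacency over predicate nodes in the ground factor graph
        h = F.relu(adj @ self.lin1(x))
        return torch.sigmoid(self.lin2(adj @ h)).squeeze(-1)

def rule_potential(q_main, attr_prob):
    """Soft truth value of the toy rule Main(x) => Attr(x),
    relaxed as 1 - q_main * (1 - attr_prob)."""
    return 1.0 - q_main * (1.0 - attr_prob)

# Toy "learning component" outputs: DNN scores for main and attribute predicates.
torch.manual_seed(0)
n = 8
dnn_main = torch.rand(n)                 # main classifier marginals (evidence)
dnn_attr = torch.rand(n)                 # attribute classifier marginals
feats = torch.stack([dnn_main, dnn_attr], dim=-1)
adj = torch.eye(n)                       # trivial graph, for illustration only

gcn = SimpleGCN(in_dim=2, hidden_dim=16)
rule_weight = torch.zeros(1, requires_grad=True)   # MLN rule weight to learn
opt_e = torch.optim.Adam(gcn.parameters(), lr=1e-2)
opt_m = torch.optim.Adam([rule_weight], lr=1e-2)

for it in range(200):
    # E-step: fit q(z) to the DNN evidence and the (fixed-weight) rule, plus entropy.
    q = gcn(feats, adj)
    evidence = (q * torch.log(dnn_main + 1e-6)
                + (1 - q) * torch.log(1 - dnn_main + 1e-6)).sum()
    rule_term = rule_weight.detach() * rule_potential(q, dnn_attr).sum()
    entropy = -(q * torch.log(q + 1e-6) + (1 - q) * torch.log(1 - q + 1e-6)).sum()
    elbo = evidence + rule_term + entropy
    opt_e.zero_grad(); (-elbo).backward(); opt_e.step()

    # M-step: update the rule weight with q(z) fixed, using a per-grounding
    # partition approximation log Z(w) ~= softplus(w) so the objective is bounded.
    q = gcn(feats, adj).detach()
    pot = rule_potential(q, dnn_attr)
    m_obj = (rule_weight * pot - F.softplus(rule_weight)).sum()
    opt_m.zero_grad(); (-m_obj).backward(); opt_m.step()

print("learned rule weight:", rule_weight.item())
```

Under these simplifications the M-step converges where sigmoid(w) matches the average expected rule satisfaction under q, which mirrors (in toy form) the moment-matching behavior of MLN weight learning; the full pipeline would instead use the paper's grounded knowledge rules and certified smoothing on the DNN sensors.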