Output reachability and adversarial robustness are among the most relevant safety properties of neural networks. We show that in the context of Message Passing Neural Networks (MPNNs), a common Graph Neural Network (GNN) model, formal verification is impossible. In particular, we show that output reachability of graph-classifier MPNNs operating over graphs of unbounded size, non-trivial degree, and sufficiently expressive node labels cannot be verified formally: there is no algorithm that, given an MPNN, correctly answers (with yes or no) whether there exists some valid input to the MPNN such that the corresponding output satisfies a given specification. However, we also show that output reachability and adversarial robustness of node-classifier MPNNs can be verified formally when a limit on the degree of input graphs is given a priori. We discuss the implications of these results, with the aim of obtaining a complete picture of the possibility, in principle, of formally verifying GNNs, depending on the expressiveness of the involved GNN models and input-output specifications.
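For concreteness, the output reachability question described above can be stated as a decision problem (the notation here is ours, introduced only for illustration): given an MPNN $N$, a class $\mathcal{G}$ of valid input graphs, and an output specification $\psi$, decide whether
\[
\exists\, G \in \mathcal{G} \;:\; \psi\big(N(G)\big)
\]
holds. The negative result says that no algorithm decides this question for every such instance when $\mathcal{G}$ ranges over graphs of unbounded size, non-trivial degree, and sufficiently expressive node labels; the positive result says that, for node-classifier MPNNs, it becomes decidable once the degree of the input graphs is bounded a priori.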