In this paper, we consider the computational complexity of formally verifying the behavior of Rectified Linear Unit (ReLU) Neural Networks (NNs), where verification entails determining whether the NN satisfies convex polytopic specifications. Specifically, we show that for two different NN architectures -- shallow NNs and Two-Level Lattice (TLL) NNs -- the verification problem with (convex) polytopic constraints is polynomial in the number of neurons in the NN to be verified, when all other aspects of the verification problem are held fixed. We achieve these complexity results by exhibiting explicit (but similar) verification algorithms for each type of architecture. Both algorithms efficiently translate the NN parameters into a partitioning of the NN's input space by means of hyperplanes; this has the effect of partitioning the original verification problem into polynomially many sub-verification problems derived from the geometry of the neurons. We show that these sub-problems may be chosen so that the NN is purely affine within each, and hence each sub-problem is solvable in polynomial time by means of a Linear Program (LP). Thus, a polynomial-time algorithm for the original verification problem can be obtained using known algorithms for enumerating the regions in a hyperplane arrangement. Finally, we adapt our proposed algorithms to the verification of dynamical systems, specifically when these NN architectures are used as state-feedback controllers for Linear Time-Invariant (LTI) systems. We further evaluate the viability of this approach numerically.
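The region-wise verification strategy summarized above can be illustrated with a minimal sketch for a shallow ReLU NN. The sketch below splits the input box into activation regions, on each of which the network is purely affine, and solves one LP per region and per row of the output specification. For clarity it brute-forces all activation patterns rather than using the polynomial-time hyperplane-arrangement region enumeration the paper relies on; all function and variable names here are illustrative, not from the paper.

```python
# Hedged sketch: verify H @ f(x) <= h for all x in a box, where
# f(x) = W2 @ relu(W1 @ x + b1) + b2 is a shallow ReLU NN.
# Brute-force activation-pattern enumeration stands in for the
# polynomial hyperplane-arrangement enumeration described in the paper.
import itertools
import numpy as np
from scipy.optimize import linprog

def verify_shallow_relu(W1, b1, W2, b2, H, h, lo, hi, tol=1e-7):
    """Return True iff H @ f(x) <= h holds for every x in the box [lo, hi]."""
    n = W1.shape[0]                      # number of hidden neurons
    bounds = list(zip(lo, hi))           # box constraints on the input
    for s in itertools.product([0.0, 1.0], repeat=n):
        D = np.diag(s)
        # Within this activation region the NN is affine: f(x) = A x + c.
        A = W2 @ D @ W1
        c = W2 @ D @ b1 + b2
        # Region constraints: neuron i active (s_i = 1) means
        # W1_i x + b1_i >= 0; inactive means <= 0. Encode both as
        # sign_i * (W1_i x + b1_i) <= 0 with sign_i = 1 - 2 s_i.
        sign = 1.0 - 2.0 * np.asarray(s)
        A_reg = sign[:, None] * W1
        b_reg = -sign * b1
        for j in range(H.shape[0]):
            # Maximize H_j (A x + c) over the region by minimizing its negation.
            res = linprog(-(H[j] @ A), A_ub=A_reg, b_ub=b_reg,
                          bounds=bounds, method="highs")
            if res.status != 0:
                continue                 # empty region: nothing to check
            worst = -res.fun + H[j] @ c
            if worst > h[j] + tol:
                return False             # counterexample exists in this region
    return True
```

As a usage example, for the identity-width-one network f(x) = relu(x) on the box [-1, 1], the specification y <= 2 is satisfied while y <= 0.5 is not (the worst case is f(1) = 1).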