We investigate the complexity of the reachability problem for (deep) neural networks: does the network compute a valid output for some valid input? It was recently claimed that the problem is NP-complete for general neural networks and specifications over the input/output dimensions given by conjunctions of linear inequalities. We recapitulate the proof and repair some flaws in the original upper- and lower-bound proofs. Motivated by the general result, we show that NP-hardness already holds for restricted classes of simple specifications and neural networks. A single hidden layer and an output dimension of one, together with weights and biases restricted to one negative value, zero, and one positive value, are already sufficient to ensure NP-hardness. Additionally, we give a thorough discussion of, and an outlook on, possible extensions of this line of research on neural network verification.
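To make the decision problem concrete, the following is a minimal illustrative sketch (not taken from the paper): it decides reachability for a tiny one-hidden-layer ReLU network with input and output specifications given as conjunctions of linear inequalities, by enumerating the ReLU activation patterns and solving one linear feasibility problem per pattern. The network, the specifications, and the helper name `reachable` are hypothetical examples chosen for illustration.

```python
# Illustrative sketch: reachability for a one-hidden-layer ReLU network,
# y = W2 * relu(W1 x + b1) + b2, with input spec A_in x <= c_in and
# output spec A_out y <= c_out, decided by enumerating activation patterns.
import itertools
import numpy as np
from scipy.optimize import linprog

def reachable(W1, b1, W2, b2, A_in, c_in, A_out, c_out):
    """Does some x with A_in x <= c_in yield an output y with A_out y <= c_out?"""
    n_hidden, n_in = W1.shape
    for pattern in itertools.product([0, 1], repeat=n_hidden):
        s = np.array(pattern)                      # 1 = neuron active, 0 = inactive
        # On this region the network is affine: y = W2 diag(s) (W1 x + b1) + b2.
        M = W2 @ (s[:, None] * W1)
        m = W2 @ (s * b1) + b2
        # Constraints fixing the pattern: active neurons have pre-activation >= 0,
        # inactive ones <= 0.
        sign = 1 - 2 * s                           # -1 for active, +1 for inactive
        A_pat = sign[:, None] * W1
        c_pat = -sign * b1
        # Stack input spec, pattern region, and output spec rewritten over x.
        A = np.vstack([A_in, A_pat, A_out @ M])
        c = np.concatenate([c_in, c_pat, c_out - A_out @ m])
        res = linprog(np.zeros(n_in), A_ub=A, b_ub=c,
                      bounds=[(None, None)] * n_in)
        if res.status == 0:                        # feasible: witness input exists
            return True
    return False

# Hypothetical example: the network computes |x| via relu(x) + relu(-x).
W1 = np.array([[1.0], [-1.0]]); b1 = np.array([0.0, 0.0])
W2 = np.array([[1.0, 1.0]]);    b2 = np.array([0.0])
A_in = np.array([[1.0], [-1.0]]); c_in = np.array([1.0, 1.0])   # -1 <= x <= 1
A_out = np.array([[-1.0]]);       c_out = np.array([-0.5])      # y >= 0.5
print(reachable(W1, b1, W2, b2, A_in, c_in, A_out, c_out))      # True, e.g. x = 1
```

The enumeration visits all 2^n activation patterns of the n hidden neurons, so this naive procedure is exponential in the worst case, in line with the NP-hardness results the abstract refers to; it is meant only to clarify the problem statement, not to reflect the paper's constructions.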