We investigate the complexity of the reachability problem for (deep) neural networks: does the network compute a valid output given some valid input? It was recently claimed that this problem is NP-complete for general neural networks and specifications over the input/output dimensions given by conjunctions of linear inequalities. We recapitulate the proof and repair some flaws in the original upper- and lower-bound proofs. Motivated by the general result, we show that NP-hardness already holds for restricted classes of simple specifications and neural networks. Allowing for a single hidden layer and an output dimension of one, as well as neural networks with just one negative, zero, and one positive weight or bias value, is sufficient to ensure NP-hardness. Additionally, we give a thorough discussion and an outlook on possible extensions of this line of research on neural network verification.
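As a minimal sketch of the decision problem described above (the symbols $N$, $\varphi_{\mathrm{in}}$, and $\varphi_{\mathrm{out}}$ are illustrative notation, not fixed by this abstract): given a neural network $N \colon \mathbb{R}^{n} \to \mathbb{R}^{m}$ and input/output specifications $\varphi_{\mathrm{in}}$ over $\mathbb{R}^{n}$ and $\varphi_{\mathrm{out}}$ over $\mathbb{R}^{m}$, each a conjunction of linear inequalities, the reachability question asks
\[
\exists x \in \mathbb{R}^{n} \colon \; \varphi_{\mathrm{in}}(x) \,\wedge\, \varphi_{\mathrm{out}}\bigl(N(x)\bigr),
\]
i.e., whether some input satisfying $\varphi_{\mathrm{in}}$ is mapped by $N$ to an output satisfying $\varphi_{\mathrm{out}}$.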