We investigate the complexity of the reachability problem for (deep) neural networks: does it compute valid output given some valid input? It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications. We repair some flaws in the original upper and lower bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and neural networks with just one layer, as well as neural networks with minimal requirements on the occurring parameters.
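To make the decision problem concrete, the following is a minimal illustrative sketch (not the paper's construction): for a one-layer ReLU network, reachability asks whether some input satisfying the input specification is mapped to an output satisfying the output specification. Here both specifications are conjunctions of interval constraints, and a coarse grid search stands in for a real decision procedure; the network weights and interval bounds are made-up toy values.

```python
import itertools

def relu(v):
    return [max(0.0, x) for x in v]

def layer(W, b, x):
    # One ReLU layer: ReLU(Wx + b).
    return relu([sum(w * xi for w, xi in zip(row, x)) + bi
                 for row, bi in zip(W, b)])

def reachable(W, b, in_box, out_box, steps=50):
    # in_box / out_box: per-dimension (lo, hi) interval constraints,
    # i.e. a conjunctive input/output specification.
    grids = [[lo + (hi - lo) * k / steps for k in range(steps + 1)]
             for lo, hi in in_box]
    for x in itertools.product(*grids):
        y = layer(W, b, list(x))
        if all(lo <= yi <= hi for yi, (lo, hi) in zip(y, out_box)):
            return True
    return False

# One neuron computing ReLU(x1 - x2): can the output reach [1, 2]
# when both inputs range over [0, 1]?  (Yes: take x = (1, 0).)
print(reachable([[1.0, -1.0]], [0.0], [(0.0, 1.0), (0.0, 1.0)], [(1.0, 2.0)]))
```

Grid search only witnesses reachability on sampled points, so it can miss satisfying inputs and never proves unreachability; the hardness results discussed above concern the exact problem, which for such conjunctive specifications is NP-complete.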