Tight and efficient neural network bounding is crucial to the scaling of neural network verification systems. Many efficient bounding algorithms have been presented recently, but they are often too loose to verify more challenging properties. This is due to the weakness of the employed relaxation, which is usually a linear program of size linear in the number of neurons. While a tighter linear relaxation for piecewise-linear activations exists, it comes at the cost of exponentially many constraints and currently lacks an efficient customized solver. We alleviate this deficiency by presenting two novel dual algorithms: one operates a subgradient method on a small active set of dual variables, the other exploits the sparsity of Frank-Wolfe type optimizers to incur only a linear memory cost. Both methods recover the strengths of the new relaxation: tightness and a linear separation oracle. At the same time, they share the benefits of previous dual approaches for weaker relaxations: massive parallelism, GPU implementation, low cost per iteration and valid bounds at any time. As a consequence, we can obtain better bounds than off-the-shelf solvers in only a fraction of their running time, attaining significant formal verification speed-ups.
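As an illustration of the first of the two dual algorithms, the sketch below runs projected subgradient ascent on the dual while restricting updates to a small active set of dual variables, periodically growing the set with the entries showing the largest (super)gradient. This is a minimal sketch under assumed interfaces, not the paper's implementation: `subgrad_fn`, the step-size schedule, and the growth heuristic are hypothetical placeholders.

```python
import numpy as np

def active_set_dual_ascent(subgrad_fn, num_duals, steps=200, lr=1e-2,
                           grow_every=20, grow_k=5):
    """Subgradient ascent on the dual, restricted to a small active set.

    `subgrad_fn(lam)` must return the dual value at `lam` together with a
    (super)gradient over all `num_duals` dual variables; only active entries
    are updated, so the per-iteration cost scales with the active set size.
    """
    lam = np.zeros(num_duals)                   # dual variables, lam >= 0
    active = np.arange(min(grow_k, num_duals))  # start from a few duals
    best = -np.inf                              # best bound found so far
    for t in range(1, steps + 1):
        val, g = subgrad_fn(lam)                # any feasible lam yields a valid bound
        best = max(best, val)
        lam[active] += (lr / np.sqrt(t)) * g[active]  # ascend on active entries only
        np.maximum(lam, 0.0, out=lam)           # project onto lam >= 0
        if t % grow_every == 0:                 # periodically grow the active set
            inactive = np.setdiff1d(np.arange(num_duals), active)
            if inactive.size:                   # pick the largest-gradient duals
                picks = inactive[np.argsort(-g[inactive])[:grow_k]]
                active = np.concatenate([active, picks])
    return best, lam
```

In the paper's setting, `subgrad_fn` would evaluate the Lagrangian dual of the tighter relaxation through its linear separation oracle; it is left abstract here. Because every feasible dual point is a valid bound, the loop can be stopped at any iteration, matching the anytime property claimed in the abstract.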