To rigorously certify the robustness of neural networks to adversarial perturbations, most state-of-the-art techniques rely on a triangle-shaped linear programming (LP) relaxation of the ReLU activation. While the LP relaxation is exact for a single neuron, recent results suggest that it faces an inherent "convex relaxation barrier" as more activations are added and as the attack budget is increased. In this paper, we propose a nonconvex relaxation of the ReLU activation, based on a low-rank restriction of a semidefinite programming (SDP) relaxation. We show that the nonconvex relaxation has a complexity comparable to that of the LP relaxation, but enjoys improved tightness on par with the much more expensive SDP relaxation. Despite nonconvexity, we prove that the verification problem satisfies constraint qualification, and therefore a Riemannian staircase approach is guaranteed to compute a near-globally optimal solution in polynomial time. Our experiments provide evidence that our nonconvex relaxation almost completely overcomes the "convex relaxation barrier" faced by the LP relaxation.
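As a concrete illustration of the baseline (the notation here is ours, not taken from the paper), the triangle relaxation replaces the graph of a single ReLU neuron $y = \max\{x, 0\}$, with pre-activation bounds $\ell \le x \le u$ and $\ell < 0 < u$, by its convex hull:
\[
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - \ell)}{u - \ell}.
\]
This set is exact for one neuron in isolation; the looseness only arises when the relaxation is intersected across many neurons jointly, which is the source of the barrier described above.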
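A minimal sketch of the low-rank restriction, following the standard Burer-Monteiro pattern (again, the symbols are illustrative rather than the paper's): the SDP relaxation lifts the activation vector $x \in \mathbb{R}^n$ into a matrix variable $P \succeq 0$ that stands in for the rank-one outer product $[1;\,x][1;\,x]^\top$, and the nonconvex relaxation restricts this lift to a low-rank factorization
\[
P = V V^\top, \qquad V \in \mathbb{R}^{(n+1)\times r}, \qquad r \ll n,
\]
optimized directly over the factor $V$. Under this reading, the Riemannian staircase increases the rank $r$ one step at a time until a second-order stationary point over $V$ certifies near-global optimality of the relaxation.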