Training deep neural networks that are robust to norm-bounded adversarial attacks remains an elusive problem. While exact and inexact verification-based methods are generally too expensive for training large networks, it has been demonstrated that bounded input intervals can be inexpensively propagated from one layer to the next through deep networks. This interval bound propagation (IBP) approach not only improved both robustness and certified accuracy but was the first to be employed on large/deep networks. However, due to the very loose nature of the IBP bounds, the required training procedure is complex and involved. In this paper, we closely examine the bounds of a block of layers composed in the form Affine-ReLU-Affine. To this end, we propose expected tight bounds (true bounds in expectation), referred to as ETB, which are provably tighter than IBP bounds in expectation. We then extend this result to deeper networks through blockwise propagation and show that we can achieve orders of magnitude tighter bounds compared to IBP. Furthermore, using a simple standard training procedure, we can achieve an impressive robustness-accuracy trade-off on both MNIST and CIFAR10.
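For context, the following is a minimal NumPy sketch of the standard layerwise IBP baseline through an Affine-ReLU-Affine block of the kind analyzed here; it illustrates the baseline interval propagation only, not the proposed ETB bounds, and the helper names, shapes, and epsilon-ball example are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    # Standard IBP rule for an affine layer y = W x + b:
    # propagate the centre through W and the radius through |W|.
    c, r = (u + l) / 2.0, (u - l) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def ibp_relu(l, u):
    # ReLU is elementwise monotone, so bounds pass through directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def ibp_block(l, u, W1, b1, W2, b2):
    # Layerwise IBP through an Affine-ReLU-Affine block.
    l, u = ibp_affine(l, u, W1, b1)
    l, u = ibp_relu(l, u)
    return ibp_affine(l, u, W2, b2)

# Example (illustrative): bound the block's output over an
# l_inf ball of radius eps around an input x.
rng = np.random.default_rng(0)
x, eps = rng.normal(size=8), 0.1
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)
lower, upper = ibp_block(x - eps, x + eps, W1, b1, W2, b2)
print(lower, upper)
```

Because each layer's bound is computed independently of the next, the resulting intervals grow quickly with depth, which is the looseness that the expected tight bounds are intended to address.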