Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers have observed an intriguing phenomenon on these IBP-trained networks: CROWN, a bounding method based on tight linear relaxations, often gives very loose bounds on them. We also observe that most neurons become dead during the IBP training process, which can hurt the representation capability of the network. In this paper, we study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when appropriate bounding lines are chosen. We further propose a relaxed version of CROWN, linear bound propagation (LBP), which can be used to verify large networks and obtain lower verified errors than IBP. We also design a new activation function, the parameterized ramp function (ParamRamp), which offers more diverse neuron statuses than ReLU. We conduct extensive experiments on MNIST, CIFAR-10 and Tiny-ImageNet with the ParamRamp activation and achieve state-of-the-art verified robustness. Code and the appendix are available at https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
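To make the IBP mechanism referenced above concrete, below is a minimal sketch of how interval bounds are propagated through one affine layer using the standard center/radius form; the function name `ibp_affine` and the NumPy-based setup are illustrative assumptions and not code from the paper's repository.

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate elementwise bounds [lower, upper] through y = W x + b via IBP.

    The interval center is mapped exactly by W, while the interval radius is
    mapped by |W|, giving sound (but possibly loose) output bounds.
    """
    center = (upper + lower) / 2.0      # midpoint of the input box
    radius = (upper - lower) / 2.0      # half-width of the input box
    out_center = W @ center + b         # exact image of the center
    out_radius = np.abs(W) @ radius     # worst-case spread of the box
    return out_center - out_radius, out_center + out_radius

# Example: a 2-input, 2-output layer with inputs perturbed within +/- 0.1.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.2])
l, u = ibp_affine(W, b, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
# For a subsequent ReLU layer, IBP simply clips: np.maximum(l, 0), np.maximum(u, 0).
```

Repeating this layer by layer yields the verified output bounds that IBP training optimizes, and whose looseness relative to CROWN and LBP is the subject of the paper.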