Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important $\ell_\infty$ perturbation setting is rather limited, and a principled understanding of how to design expressive $\ell_\infty$ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified $\ell_\infty$ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and $\ell_\infty$-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. Our code is available at https://github.com/zbh2047/SortNet.
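To make the role of order statistic functions concrete, below is a minimal sketch (assuming a PyTorch setting) of the two operations referenced in the abstract: the GroupSort activation and an $\ell_\infty$-distance neuron. The function names `group_sort` and `linf_distance_neuron` are illustrative and not taken from the paper's code; see the repository linked above for the actual implementation.

```python
import torch

def group_sort(x: torch.Tensor, group_size: int = 2) -> torch.Tensor:
    # GroupSort activation: sort values within each group of `group_size` features.
    # Sorting is an order-statistic operation and is 1-Lipschitz w.r.t. l_p norms.
    b, d = x.shape
    assert d % group_size == 0, "feature dimension must be divisible by group_size"
    return x.view(b, d // group_size, group_size).sort(dim=-1).values.view(b, d)

def linf_distance_neuron(x: torch.Tensor, w: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # l_inf-distance neuron: u(x) = ||x - w||_inf + b for each weight vector w.
    # The map x -> ||x - w||_inf is 1-Lipschitz w.r.t. the l_inf norm.
    # x: (batch, d), w: (units, d), bias: (units,) -> output: (batch, units)
    return (x.unsqueeze(1) - w.unsqueeze(0)).abs().amax(dim=-1) + bias
```

Both operations compute order statistics (a max or a full sort) rather than norm-bounded affine maps, which is what lets such networks sidestep the impossibility results stated above.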