Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings. Most certification methods in the literature are designed for adversarial or worst-case inputs, but researchers have recently shown a need for methods that consider random input noise. In this paper, we examine the setting where inputs are subject to random noise drawn from an arbitrary probability distribution. We propose a robustness certification method that lower-bounds the probability that network outputs are safe. This bound is cast as a chance-constrained optimization problem, which is then reformulated using input-output samples to make the optimization constraints tractable. We develop sufficient conditions for the resulting optimization to be convex, as well as for the number of samples needed to make the robustness bound hold with overwhelming probability. For a special case, we show that the proposed optimization reduces to an intuitive closed-form solution. Case studies on synthetic, MNIST, and CIFAR-10 networks experimentally demonstrate that this method certifies robustness against various input noise regimes over larger uncertainty regions than prior state-of-the-art techniques.
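To fix ideas, the following is a minimal sketch of the kind of chance-constrained certificate the abstract describes, written in illustrative notation of our own (a nominal input $\bar{x}$, network $f$, noise distribution $\mathcal{D}$, safe output set $S$, and risk levels $\epsilon, \beta$; these symbols are our assumptions, and the paper's exact formulation may differ). The goal is to certify
\[
\mathbb{P}_{\delta \sim \mathcal{D}}\big[ f(\bar{x} + \delta) \in S \big] \;\ge\; 1 - \epsilon ,
\]
which the sample-based reformulation makes tractable by replacing the distributional constraint with its empirical counterpart over $N$ i.i.d. draws $\delta^{(1)}, \dots, \delta^{(N)} \sim \mathcal{D}$:
\[
f\big(\bar{x} + \delta^{(i)}\big) \in S \quad \text{for all } i = 1, \dots, N .
\]
In standard scenario-optimization arguments of this flavor, taking $N$ on the order of $\tfrac{1}{\epsilon}\big(d + \ln \tfrac{1}{\beta}\big)$, where $d$ is the number of decision variables, suffices for the certificate to hold with probability at least $1 - \beta$ over the sampled noise.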