We present Prover, a scalable and precise verifier for recurrent neural networks, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions of the non-convex, non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions per neuron. Using Prover, we present the first study of certifying a non-trivial use case of recurrent neural networks, namely speech classification. To achieve this, we additionally develop custom abstractions for the non-linear speech preprocessing pipeline. Our evaluation shows that Prover successfully verifies several challenging recurrent models in computer vision, speech, and motion-sensor data classification that are beyond the reach of prior work.
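The flavor of idea (i) can be illustrated with a minimal sketch, under stated assumptions: we bound an LSTM-style gate product f(x, y) = sigmoid(x) * tanh(y) on a box by sampling points, fitting a plane by least squares, and then shifting the plane upward by the worst violation. Here the violation is searched on a dense grid as a simple stand-in for the exact extrema search (via optimization and Fermat's theorem) that Prover performs; the function `linear_upper_bound` and all parameter choices are illustrative, not the paper's implementation, and the resulting bound is only sound up to the grid resolution.

```python
import numpy as np

def f(x, y):
    # Illustrative recurrent update term: sigmoid(x) * tanh(y),
    # the non-convex product appearing in LSTM cell updates.
    return np.tanh(y) / (1.0 + np.exp(-x))

def linear_upper_bound(lo, hi, n_fit=400, n_check=200, seed=0):
    """Fit a plane a*x + b*y + c that upper-bounds f on [lo, hi] (grid-checked)."""
    rng = np.random.default_rng(seed)
    # Step 1: sample the function inside the box.
    xs = rng.uniform(lo[0], hi[0], n_fit)
    ys = rng.uniform(lo[1], hi[1], n_fit)
    # Step 2: least-squares plane fit through the samples.
    A = np.column_stack([xs, ys, np.ones(n_fit)])
    a, b, c = np.linalg.lstsq(A, f(xs, ys), rcond=None)[0]
    # Step 3: shift the plane up by the largest violation found on a
    # dense grid (a proxy for the exact maximum of f minus the plane).
    gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], n_check),
                         np.linspace(lo[1], hi[1], n_check))
    c += max(0.0, float(np.max(f(gx, gy) - (a * gx + b * gy + c))))
    return a, b, c
```

A lower bound is obtained symmetrically by shifting the plane down by the largest violation in the other direction; collecting several such planes yields a polyhedral abstraction of the update function.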