Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods impose specific requirements on the Deep Neural Networks (DNNs) they handle: they either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or require specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structure, such as its layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input and generating ground-truth adversarial examples. We also empirically demonstrate that DeepAgn handles a broader class of deep neural networks, including FNNs and RNNs with very deep layers and millions of neurons, with greater capability and efficiency than other state-of-the-art verification approaches.
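To make the Lipschitz-based reachability idea concrete, a minimal sketch follows. It is our illustration, not DeepAgn's implementation: the function name `lipschitz_min`, the restriction to a one-dimensional input segment, and the assumption that a Lipschitz constant `K` of the black-box network `f` is known are all ours. Under those assumptions, a Shubert-Piyavskii-style branch-and-bound certifiably brackets the minimum output without inspecting layers or parameters, which is the flavour of global convergence guarantee the abstract refers to.

```python
import heapq

def lipschitz_min(f, a, b, K, eps=1e-3, max_iter=10_000):
    """Bracket min_{x in [a, b]} f(x) for a K-Lipschitz black-box f.

    Each sampled endpoint pair (l, r) yields a certified lower bound
    (f(l)+f(r))/2 - K*(r-l)/2 on the interval between them; the loop
    refines the interval with the smallest bound until its gap to the
    best observed value is below eps. Returns (upper, lower), a pair
    guaranteed to contain the true minimum.
    """
    fa, fb = f(a), f(b)
    best = min(fa, fb)  # best value observed so far (upper bound)

    def lower(l, fl, r, fr):
        # Minimum of the two Lipschitz cones anchored at l and r.
        return 0.5 * (fl + fr) - 0.5 * K * (r - l)

    heap = [(lower(a, fa, b, fb), a, fa, b, fb)]  # min-heap on the bound
    for _ in range(max_iter):
        bound, l, fl, r, fr = heapq.heappop(heap)
        if best - bound < eps:  # certified global gap closed
            return best, bound
        # Split at the minimiser of the piecewise-linear lower envelope.
        m = 0.5 * (l + r) + (fl - fr) / (2.0 * K)
        fm = f(m)
        best = min(best, fm)
        heapq.heappush(heap, (lower(l, fl, m, fm), l, fl, m, fm))
        heapq.heappush(heap, (lower(m, fm, r, fr), m, fm, r, fr))
    return best, heap[0][0]

# Example: bracket the minimum of a toy 1-Lipschitz function on [0, 4].
ub, lb = lipschitz_min(lambda x: abs(x - 1.3), a=0.0, b=4.0, K=1.0)
```

The upper end of the reachable output interval follows by running the same routine on `-f`; lifting this one-dimensional scheme to full input subspaces is where the abstract's novel optimisation scheme would come in, and is not reproduced here.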