This paper presents a new reachability analysis approach to compute interval over-approximations of the output set of feedforward neural networks with input uncertainty. We adapt an existing mixed-monotonicity method for the reachability analysis of dynamical systems to neural networks, applying it to each partial network within the main network. This ensures that the intersection of the obtained results is the tightest interval over-approximation of each layer's output that can be obtained using mixed-monotonicity on any partial network decomposition. Unlike other tools in the literature, which focus on small classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality: it can handle neural networks with any Lipschitz-continuous activation function. In addition, the simplicity of our framework allows users to easily add unimplemented activation functions by providing the function, its derivative, and the global argmin and argmax of the derivative. We compare our algorithm to five other interval-based tools (Interval Bound Propagation, ReluVal, Neurify, VeriNet, CROWN) on both existing benchmarks and two sets of small and large randomly generated networks, for four activation functions (ReLU, TanH, ELU, SiLU).
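To make the bounding step concrete, below is a minimal Python sketch of one standard mixed-monotonicity construction from the interval reachability literature: given elementwise bounds J_low <= J(x) <= J_up on the Jacobian of a (partial) network f over the input box, a decomposition function yields sound output bounds. This is a sketch under our own assumptions, not the paper's implementation; the function name mixed_monotonicity_bounds and the toy tanh layer are illustrative.

```python
import numpy as np

def mixed_monotonicity_bounds(f, J_low, J_up, x_low, x_up):
    """Interval over-approximation of f over [x_low, x_up], given
    elementwise Jacobian bounds J_low <= J(x) <= J_up on that box.
    One standard decomposition-function construction (a sketch)."""
    m, n = J_low.shape
    out_low, out_up = np.empty(m), np.empty(m)
    width = x_up - x_low
    for i in range(m):
        # Per input j: if the partial derivative has a known sign, feed the
        # matching corner of the box to f; otherwise add a slope correction
        # alpha chosen as the smaller of |J_low| and J_up.
        unstable = (J_low[i] < 0) & (J_up[i] > 0)
        use_x = (J_low[i] >= 0) | (unstable & (-J_low[i] <= J_up[i]))
        alpha = np.where(unstable,
                         np.where(-J_low[i] <= J_up[i], -J_low[i], J_up[i]),
                         0.0)
        # Upper bound: d_i(x_up, x_low); lower bound: d_i(x_low, x_up).
        out_up[i] = f(np.where(use_x, x_up, x_low))[i] + alpha @ width
        out_low[i] = f(np.where(use_x, x_low, x_up))[i] - alpha @ width
    return out_low, out_up

# Toy one-layer example y = tanh(W x + b). Since tanh' lies in [0, 1],
# J(x) = diag(tanh'(W x + b)) W admits the crude elementwise bounds below.
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.2])
x_low, x_up = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
f = lambda x: np.tanh(W @ x + b)
J_low, J_up = np.minimum(0.0, W), np.maximum(0.0, W)
lo, hi = mixed_monotonicity_bounds(f, J_low, J_up, x_low, x_up)
```

Applying such a bound to every partial network of the decomposition and intersecting the resulting intervals is what yields the tightness guarantee stated above.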
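The extension interface mentioned in the abstract (provide the function, its derivative, and the global argmin and argmax of the derivative) can be pictured as follows. This is a hypothetical sketch: register_activation and derivative_range are names we introduce, not the tool's API. It uses SiLU, whose derivative attains its global minimum and maximum near x ≈ −2.3994 and x ≈ +2.3994, and assumes the derivative has no other interior extrema, so its range on any interval is determined by the endpoints and the clipped extrema.

```python
import numpy as np

ACTIVATIONS = {}

def register_activation(name, f, df, argmin_df, argmax_df):
    """Register an activation from the four ingredients the abstract lists:
    the function, its derivative, and the derivative's global argmin/argmax."""
    ACTIVATIONS[name] = {"f": f, "df": df,
                         "argmin_df": argmin_df, "argmax_df": argmax_df}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# SiLU(x) = x * sigmoid(x); its derivative is sigmoid(x) * (1 + x * (1 - sigmoid(x))),
# with global extrema near x = ±2.3994 (approx. -0.0998 and 1.0998).
register_activation(
    "silu",
    f=lambda x: x * sigmoid(x),
    df=lambda x: sigmoid(x) * (1.0 + x * (1.0 - sigmoid(x))),
    argmin_df=-2.3994,
    argmax_df=2.3994,
)

def derivative_range(name, lo, hi):
    """Bound the derivative over [lo, hi]: evaluate it at the endpoints and
    at whichever registered global extremum falls inside the interval."""
    a = ACTIVATIONS[name]
    candidates = [lo, hi] + [c for c in (a["argmin_df"], a["argmax_df"])
                             if lo <= c <= hi]
    vals = [a["df"](x) for x in candidates]
    return min(vals), max(vals)
```

Such derivative bounds are exactly what is needed to form the Jacobian bounds consumed by the mixed-monotonicity step sketched earlier.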