Neural networks are a powerful class of non-linear functions. However, their black-box nature makes it difficult to explain their behaviour and certify their safety. Abstraction techniques address this challenge by transforming the neural network into a simpler function that over-approximates its behaviour. Unfortunately, existing abstraction techniques are slack, which limits their applicability to small local regions of the input domain. In this paper, we propose Global Interval Neural Network Abstractions with Center-Exact Reconstruction (GINNACER). Our novel abstraction technique produces sound over-approximation bounds over the whole input domain while guaranteeing exact reconstructions for any given local input. Our experiments show that GINNACER is several orders of magnitude tighter than state-of-the-art global abstraction techniques, while remaining competitive with local ones.
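To make the notion of a sound interval over-approximation concrete, the following is a minimal sketch using plain interval bound propagation through a toy two-layer ReLU network with randomly drawn weights. It is an illustration of the general idea only, not GINNACER's construction; the network, its weights, and the helper functions (`net`, `affine_bounds`, `interval_net`) are hypothetical names introduced here for exposition.

```python
# A minimal sketch of interval abstraction for a ReLU network, assuming
# NumPy and a toy random two-layer network. GINNACER's actual construction
# (globally sound bounds that are exact at a chosen center) is more
# involved than the plain interval propagation shown here.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def net(x):
    """Concrete two-layer ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def affine_bounds(W, b, lo, hi):
    """Sound interval propagation through an affine layer: split W into
    positive and negative parts so each output bound pairs with the
    worst-case input bound."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_net(lo, hi):
    """Over-approximated network: maps an input box to an output box."""
    lo, hi = affine_bounds(W1, b1, lo, hi)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return affine_bounds(W2, b2, lo, hi)

# Soundness: every concrete output must lie inside the abstract output box.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
out_lo, out_hi = interval_net(lo, hi)
x = rng.uniform(lo, hi)
assert np.all(out_lo <= net(x)) and np.all(net(x) <= out_hi)

# On a degenerate box [c, c] the bounds collapse to the exact output.
# This is a weaker, point-wise analogue of GINNACER's center-exact
# guarantee, which holds for a fixed abstraction over the whole domain.
c = np.zeros(2)
exact_lo, exact_hi = interval_net(c, c)
assert np.allclose(exact_lo, net(c)) and np.allclose(exact_hi, net(c))
```

Note the design tension the abstract alludes to: the wider the input box, the slacker the propagated bounds become, which is why naive interval abstractions are typically usable only on small local regions.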