Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative. However, even for reasonably-sized neural networks, these relaxations are not tractable, and so must be replaced by even weaker relaxations in practice. In this work, we propose a novel operator splitting method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller sub-problems that often have analytical solutions. The method is modular, scales to very large problem instances, and comprises operations that are amenable to fast parallelization with GPU acceleration. We demonstrate our method in bounding the worst-case performance of large convolutional networks in image classification and reinforcement learning settings, and in reachability analysis of neural network dynamical systems.
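To make the splitting idea concrete, here is a minimal sketch of the generic operator-splitting (ADMM) pattern the abstract alludes to: a hard constrained problem is split so that each sub-problem reduces to an analytical projection. The toy problem (finding a point in the intersection of a box and a half-space), the function names, and all parameters below are illustrative assumptions, not the paper's actual relaxation or algorithm.

```python
import numpy as np

def proj_box(v, lo, hi):
    # Analytical sub-problem: projection onto the box [lo, hi] is
    # elementwise clipping.
    return np.clip(v, lo, hi)

def proj_halfspace(v, a, b):
    # Analytical sub-problem: projection onto {z : a^T z <= b}.
    s = a @ v - b
    return v if s <= 0 else v - (s / (a @ a)) * a

def admm_intersection(lo, hi, a, b, iters=200):
    # ADMM on indicator(box)(x) + indicator(halfspace)(z) s.t. x = z.
    # Each iteration only performs cheap closed-form projections,
    # which is what makes this style of method easy to parallelize.
    x = np.zeros_like(lo)
    z = np.zeros_like(lo)
    u = np.zeros_like(lo)  # scaled dual variable
    for _ in range(iters):
        x = proj_box(z - u, lo, hi)        # x-update
        z = proj_halfspace(x + u, a, b)    # z-update
        u = u + x - z                      # dual update
    return x

# Toy usage: a point in [0,1]^3 satisfying z1 + z2 + z3 >= 1
# (written as -1^T z <= -1 so it fits the half-space form).
lo, hi = np.zeros(3), np.ones(3)
a, b = -np.ones(3), -1.0
print(admm_intersection(lo, hi, a, b))
```

In the paper's setting the sets would instead encode the convex relaxation of each network layer, but the iteration structure, alternating closed-form sub-problem solutions with a dual update, is the same.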