Although neural networks have been applied to a wide range of systems in recent years, they still cannot be deployed in safety-critical systems due to the lack of efficient techniques to certify their robustness. A number of techniques based on convex optimization have been proposed in the literature to study the robustness of neural networks, and the semidefinite programming (SDP) approach has emerged as a leading contender for the robust certification of neural networks. The main challenge for the SDP approach is that it is prone to a large relaxation gap. In this work, we address this issue by developing a sequential framework that shrinks this gap to zero by adding non-convex cuts to the optimization problem via disjunctive programming. We analyze the performance of this sequential SDP method both theoretically and empirically, and show that it bridges the gap as the number of cuts increases.
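To make the baseline concrete, the following is a minimal sketch (not the paper's implementation) of the basic SDP relaxation for certifying a one-hidden-layer ReLU network against an l_inf perturbation, in the spirit of the approach described above. The sequential non-convex cuts are not shown; the toy weights, nominal input, and radius `eps` are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 6, 3              # toy network sizes (assumed)
W1 = rng.standard_normal((n_hid, n_in))   # hidden-layer weights (assumed)
W2 = rng.standard_normal((n_out, n_hid))  # output-layer weights (assumed)

x0 = rng.standard_normal(n_in)            # nominal input (assumed)
eps = 0.1                                 # l_inf perturbation radius (assumed)
lo, up = x0 - eps, x0 + eps               # box bounds on the perturbed input

y0 = int(np.argmax(W2 @ np.maximum(W1 @ x0, 0)))  # nominal prediction
adv = (y0 + 1) % n_out                            # one competing class
c = W2[adv] - W2[y0]                              # margin direction on the hidden layer

# Lifted PSD matrix M = [1 x^T z^T; x X P; z P^T Z], where z models ReLU(W1 x).
dim = 1 + n_in + n_hid
M = cp.Variable((dim, dim), PSD=True)
x = M[0, 1:1 + n_in]
z = M[0, 1 + n_in:]
X = M[1:1 + n_in, 1:1 + n_in]
P = M[1:1 + n_in, 1 + n_in:]
Z = M[1 + n_in:, 1 + n_in:]

constraints = [
    M[0, 0] == 1,
    z >= 0,                                # ReLU: z >= 0
    z >= W1 @ x,                           # ReLU: z >= W1 x
    cp.diag(Z) == cp.diag(W1 @ P),         # ReLU complementarity: z .* z == z .* (W1 x)
    cp.diag(X) <= cp.multiply(lo + up, x) - lo * up,  # box constraint on the input
]

# If the relaxed worst-case margin is negative, robustness is certified;
# a large positive value may simply reflect the relaxation gap.
prob = cp.Problem(cp.Maximize(c @ z), constraints)
prob.solve(solver=cp.SCS)
print("relaxed worst-case margin:", prob.value)
print("certified robust:", prob.value < 0)
```

The relaxation gap discussed above is the difference between this relaxed worst-case margin and the true worst-case margin over the perturbation set; the sequential method adds non-convex cuts to this problem to drive that difference to zero.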