This paper introduces scalable, sampling-based algorithms for optimizing over trained neural networks with ReLU activations. We first propose an iterative algorithm that exploits the piecewise-linear structure of ReLU neural networks and, through sampling, reduces the initial mixed-integer optimization problem (MIP) to multiple easy-to-solve linear optimization problems (LPs). Subsequently, we extend this approach by searching the neighborhood of the LP solution computed at each iteration. This scheme allows us to devise a second, enhanced algorithm that reduces the initial MIP to smaller, easier-to-solve MIPs. We show analytically that both methods converge and provide a sample-complexity guarantee. We also validate the performance of our algorithms by comparing them against state-of-the-art MIP-based methods. Finally, we show computationally how the sampling algorithms can be used effectively to warm-start MIP-based methods.
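To make the first reduction concrete, the sketch below illustrates the underlying idea under simplifying assumptions (a fully connected network, a box-constrained input domain, a scalar output); the function names and the sampling loop are illustrative, not the paper's implementation. A sampled input induces a ReLU activation pattern; fixing that pattern makes the network affine in its input, so maximizing the output subject to the constraints that keep the pattern consistent is an LP.

```python
import numpy as np
from scipy.optimize import linprog

def activation_pattern(weights, biases, x):
    """Forward pass through the hidden layers, recording which ReLUs fire."""
    pattern, h = [], x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ h + b
        pattern.append(z > 0)
        h = np.maximum(z, 0.0)
    return pattern

def lp_for_pattern(weights, biases, pattern, lb, ub):
    """With a fixed activation pattern the network is affine in x, so
    maximizing the scalar output over the box [lb, ub], subject to the
    half-spaces that keep the pattern consistent, is an LP."""
    A_aff, c_aff = np.eye(len(lb)), np.zeros(len(lb))  # h = A_aff @ x + c_aff
    A_ub, b_ub = [], []
    for W, b, p in zip(weights[:-1], biases[:-1], pattern):
        Z_A, Z_c = W @ A_aff, W @ c_aff + b        # pre-activations, affine in x
        sign = np.where(p, -1.0, 1.0)              # active unit: z >= 0  ->  -z <= 0
        A_ub.append(sign[:, None] * Z_A)
        b_ub.append(-sign * Z_c)
        D = np.diag(p.astype(float))               # fixed ReLU mask: layer becomes linear
        A_aff, c_aff = D @ Z_A, D @ Z_c
    obj = (weights[-1] @ A_aff).ravel()            # linear output layer, no ReLU
    return linprog(-obj,                           # linprog minimizes; negate to maximize
                   A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                   bounds=list(zip(lb, ub)))

def sample_and_solve(weights, biases, lb, ub, n_samples=100, seed=0):
    """Sample inputs, fix each induced pattern, solve the resulting LP,
    and keep the best solution found."""
    rng = np.random.default_rng(seed)
    best_val, best_x = -np.inf, None
    for _ in range(n_samples):
        x0 = rng.uniform(lb, ub)
        pat = activation_pattern(weights, biases, x0)
        res = lp_for_pattern(weights, biases, pat, lb, ub)
        if res.success and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    # add back the constant output bias, which does not affect the maximizer
    return best_val + float(np.squeeze(biases[-1])), best_x
```

Each sampled point is feasible for its own LP by construction, so every subproblem returns a candidate at least as good as the sample that generated it; the paper's second, enhanced algorithm additionally searches a neighborhood of each LP solution by solving a smaller MIP.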