We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determine the stability of neurons with Rectified Linear Unit (ReLU) activations require solving, or finding a good approximation to, multiple discrete optimization problems. In this work, we introduce an algorithm that identifies all stable neurons by solving a single optimization problem. Our approach is 183 times faster at the median than the state-of-the-art method on CIFAR-10, which allows us to explore exact compression of deeper (5 x 100) and wider (2 x 800) networks within minutes. For classifiers trained with an amount of L1 regularization that does not worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10 dataset. The code is available at https://github.com/yuxwind/ExactCompression.
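To make the notion of a "stable" neuron concrete: a ReLU neuron is stable over an input domain if its preactivation never changes sign there, so the unit behaves either as a constant zero or as the identity. Below is a minimal sketch, not the single-optimization method introduced in this work, that flags sign-stable neurons with interval bound propagation over a box input domain; this relaxation is cheaper but looser than the approaches compared in the paper, and the function names (`interval_bounds`, `stable_neurons`) are hypothetical.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate a box [lo, hi] through an affine layer Wx + b,
    using the standard interval-arithmetic split of W into its
    positive and negative parts."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def stable_neurons(weights, biases, lo, hi):
    """Flag neurons whose preactivation sign is fixed over the box.
    Upper bound <= 0 => stably inactive (output is constantly 0);
    lower bound >= 0 => stably active (ReLU acts as the identity)."""
    report = []
    for W, b in zip(weights, biases):
        lo, hi = interval_bounds(W, b, lo, hi)
        report.append({
            "stably_inactive": np.where(hi <= 0)[0],
            "stably_active": np.where(lo >= 0)[0],
        })
        # Apply ReLU to the bounds before the next layer.
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return report

# Tiny two-layer example on the input box [0, 1]^3.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
print(stable_neurons(weights, biases, np.zeros(3), np.ones(3)))
```

Stability is what enables exact compression: a stably inactive neuron can be deleted outright, and a stably active neuron can be folded into the next layer's affine map, leaving the network's input-output function unchanged on the domain.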