Large neural networks are heavily over-parameterized because over-parameterization aids optimization during training. Once the network is trained, however, many parameters can be zeroed out, or pruned, leaving an equivalent sparse neural network. We propose renormalizing sparse neural networks in order to improve accuracy. We prove that our method's error converges to zero as the network parameters cluster or concentrate, and that without renormalization the error does not converge to zero in general. We evaluate our method on the real-world datasets MNIST, Fashion-MNIST, and CIFAR-10 and confirm a large improvement in accuracy with renormalization over standard pruning.
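To make the idea concrete, below is a minimal sketch of magnitude-based pruning followed by a renormalization of the surviving weights. The specific rescaling used here, restoring the layer's L1 weight norm to its dense value, is an assumption for illustration; the paper's actual renormalization procedure is not reproduced here.

```python
import numpy as np

def prune_and_renormalize(W, sparsity=0.9):
    """Zero the smallest-magnitude weights, then rescale the survivors.

    Hypothetical illustration: the rescaling preserves the layer's L1 norm;
    the paper's actual renormalization may differ.
    """
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return W.copy()
    # Threshold at the k-th smallest magnitude, then keep only larger weights.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = W * (np.abs(W) > threshold)
    # Renormalize: scale surviving weights so the L1 norm matches the dense layer.
    pruned_norm = np.abs(pruned).sum()
    if pruned_norm > 0:
        pruned *= np.abs(W).sum() / pruned_norm
    return pruned

# Usage: prune a random weight matrix to roughly 90% sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))
W_sparse = prune_and_renormalize(W, sparsity=0.9)
print(f"sparsity: {(W_sparse == 0).mean():.2f}")
print(f"L1 norm  dense: {np.abs(W).sum():.1f}  pruned+renorm: {np.abs(W_sparse).sum():.1f}")
```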