We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy. Our algorithm uses a small batch of input points to construct a data-informed importance sampling distribution over the network's parameters, and adaptively mixes a sampling-based and deterministic pruning procedure to discard redundant weights. Our pruning method is simultaneously computationally efficient, provably accurate, and broadly applicable to various network architectures and data distributions. Our empirical comparisons show that our algorithm reliably generates highly compressed networks that incur minimal loss in performance relative to that of the original network. We present experimental results that demonstrate our algorithm's potential to unearth essential network connections that can be trained successfully in isolation, which may be of independent interest.
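The pruning procedure described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification for a single linear layer, not the paper's actual algorithm: it scores each weight by its magnitude times the average input magnitude over a small batch (a stand-in for the data-informed importance distribution), keeps a fraction of the budget deterministically for the top-scoring weights, and fills the remainder by importance sampling without replacement. The function name, scoring rule, and `det_frac` split are all assumptions for illustration.

```python
import numpy as np

def prune_layer(W, X, keep, det_frac=0.5, rng=None):
    """Sparsify weight matrix W (out x in) to `keep` nonzeros,
    using a small batch X (n x in) of input points.

    Hypothetical sketch: importance scores are |W_ij| times the mean
    absolute activation of input feature j over the batch. A fraction
    `det_frac` of the budget is spent deterministically on the
    top-scoring weights; the rest is filled by sampling without
    replacement with probability proportional to score.
    """
    rng = np.random.default_rng(rng)
    # Data-informed importance: weight magnitude times average input magnitude.
    scores = np.abs(W) * np.abs(X).mean(axis=0)  # broadcasts over output rows
    flat = scores.ravel()
    order = np.argsort(flat)[::-1]               # indices, highest score first

    n_det = int(det_frac * keep)                 # deterministic share of budget
    keep_idx = set(order[:n_det].tolist())       # top weights kept outright

    # Sampling-based share: draw the remaining weights proportionally to score.
    rest = order[n_det:]
    p = flat[rest]
    if p.sum() > 0:
        p = p / p.sum()
        sampled = rng.choice(rest, size=keep - n_det, replace=False, p=p)
        keep_idx.update(sampled.tolist())

    mask = np.zeros(flat.shape, dtype=bool)
    mask[list(keep_idx)] = True
    return W * mask.reshape(W.shape)             # pruned (sparsified) weights
```

A real implementation would instead derive the sampling distribution from provable sensitivity bounds and decide the deterministic/sampled split adaptively per layer, as the abstract indicates; the sketch only conveys the overall structure of mixing the two regimes under a fixed sparsity budget.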