Network pruning is an effective measure to alleviate the storage and computational burden that deep neural networks incur due to their heavy overparameterization. This raises a fundamental question: how sparsely can we prune a deep network without sacrificing performance? To address this question, we take a first-principles approach in this work: we directly impose the sparsity constraint on the original loss function and then characterize the necessary and sufficient conditions on the sparsity (\textit{which turn out to nearly coincide}) by leveraging the notion of \textit{statistical dimension} from convex geometry. Through this fundamental limit, we identify two key factors that determine the pruning ratio limit, namely weight magnitude and network flatness. Generally speaking, the flatter the loss landscape or the smaller the weight magnitudes, the smaller the pruning ratio. In addition, we provide efficient countermeasures to address the challenges in computing the pruning limit, which involve accurate spectrum estimation of a large-scale, non-positive-definite Hessian matrix. Moreover, through the lens of the pruning ratio threshold, we provide rigorous interpretations of several heuristics used in existing pruning algorithms. Extensive experiments demonstrate that our theoretical pruning ratio threshold agrees very well with the empirical results. All code is available at: https://github.com/QiaozheZhang/Global-One-shot-Pruning
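As a point of reference, one natural reading of "directly imposing the sparsity constraint on the original loss" is the constrained problem $\min_{\mathbf{w}} L(\mathbf{w})\ \text{s.t.}\ \|\mathbf{w}\|_0 \le k$; the abstract's formulation may differ in its details. The sketch below is a minimal, hedged illustration (not the authors' implementation) of how the "spectrum estimation of a large-scale, non-positive-definite Hessian" step could be approached in PyTorch: Hessian-vector products via double backpropagation combined with a plain Lanczos iteration, whose Ritz values approximate the extreme Hessian eigenvalues. The names `hvp`, `lanczos_spectrum`, `loss`, and `params` are placeholders introduced here for illustration.

```python
# Minimal sketch: Hessian spectrum estimation with Hessian-vector products
# and Lanczos iteration. Assumes a scalar PyTorch `loss` built from a model
# and a data batch, and `params` = list of the model's parameters (CPU, fp32).
import torch


def hvp(loss, params, vec):
    """Hessian-vector product via double backprop (no explicit Hessian)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = torch.dot(flat_grad, vec)
    hv = torch.autograd.grad(grad_dot_v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])


def lanczos_spectrum(loss, params, num_steps=30):
    """Approximate extreme Hessian eigenvalues (Ritz values) with Lanczos."""
    dim = sum(p.numel() for p in params)
    q = torch.randn(dim)
    q /= q.norm()
    q_prev = torch.zeros(dim)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(num_steps):
        w = hvp(loss, params, q)
        alpha = torch.dot(w, q).item()
        w = w - alpha * q - beta * q_prev
        beta = w.norm().item()
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-10:  # invariant subspace found; stop early
            break
        q_prev, q = q, w / beta
    # Eigenvalues of the tridiagonal Lanczos matrix T approximate the
    # extreme eigenvalues of the (possibly indefinite) Hessian.
    T = torch.diag(torch.tensor(alphas))
    for i, b in enumerate(betas[:-1]):
        T[i, i + 1] = T[i + 1, i] = b
    return torch.linalg.eigvalsh(T)
```

In practice one would run this on the same device and dtype as the model and, for full spectral-density estimates rather than extreme eigenvalues, combine it with stochastic Lanczos quadrature over several random starting vectors; this sketch only conveys the basic mechanism.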