Improving the efficiency of neural network inference is undeniably important at a time when commercial use of AI models is increasing daily. Node pruning is the art of removing computational units such as neurons, filters, attention heads, or even entire layers to significantly reduce inference time while retaining network performance. In this work, we propose projecting unit activations onto an orthogonal subspace in which there is no redundant activity and within which we may prune nodes while simultaneously recovering the impact of lost units via linear least squares. We furthermore show that the order in which units are orthogonalized can be optimized to maximally rank units by their redundancy. Finally, we leverage these orthogonal subspaces to automatically determine layer-wise pruning ratios based on the relative scale of node activations in our subspace, equivalent to cumulative variance. Our method matches or exceeds state-of-the-art pruning results on ImageNet-trained VGG-16, ResNet-50, and DeiT models while having up to 24x lower computational cost than alternative methods. We also demonstrate that this method can be applied in a one-shot manner to OPT LLMs, again outperforming competing methods.
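To make the core idea concrete, the sketch below gives one plausible reading of the procedure described above, not the authors' implementation: activations collected on calibration data are ordered by a pivoted QR decomposition (one possible way to realize the redundancy ranking), the most redundant units are pruned, and their contribution is recovered by a linear least-squares fit folded into the next layer's weights. The function name `prune_with_orthogonal_recovery`, the fixed `keep_ratio`, and the use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import qr


def prune_with_orthogonal_recovery(acts, next_weight, keep_ratio=0.5):
    """Illustrative sketch (not the paper's code).

    acts:        (n_samples, n_units) activations from a calibration set
    next_weight: (n_units, n_out) weights of the layer consuming these activations
    keep_ratio:  fraction of units to keep; the paper instead derives layer-wise
                 ratios from cumulative variance in the orthogonal subspace
    """
    n_units = acts.shape[1]
    n_keep = max(1, int(round(keep_ratio * n_units)))

    # Pivoted QR orders units so that each newly selected unit adds the most
    # activity not already explained by the previously selected units.
    _, _, piv = qr(acts, mode="economic", pivoting=True)
    keep, drop = piv[:n_keep], piv[n_keep:]

    # Least-squares map from kept activations to the dropped ones:
    # acts[:, drop] ~= acts[:, keep] @ C
    C, *_ = np.linalg.lstsq(acts[:, keep], acts[:, drop], rcond=None)

    # Fold the reconstruction into the next layer so its output is
    # approximately preserved after pruning: W_new = W[keep] + C @ W[drop]
    W_new = next_weight[keep] + C @ next_weight[drop]
    return keep, W_new
```

In this reading, pruning leaves the next layer's outputs approximately unchanged on the calibration data because the dropped units' activity is linearly reconstructed from the kept units and absorbed into the remaining weights.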