By forcing at most N out of M consecutive weights to be non-zero, the recent N:M network sparsity has received increasing attention for its two attractive advantages: 1) Promising performance at a high sparsity. 2) Significant speedups on NVIDIA A100 GPUs. However, recent studies require an expensive pre-training phase or a heavy dense-gradient computation. In this paper, we show that N:M learning can be naturally characterized as a combinatorial problem that searches for the best combination candidate within a finite collection. Motivated by this characteristic, we solve N:M sparsity in an efficient divide-and-conquer manner. First, we divide the weight vector into $C_M^N$ combination subsets of fixed size N. Then, we conquer the combinatorial problem by assigning each combination a learnable score that is jointly optimized with its associated weights. We prove that the introduced scoring mechanism can effectively model the relative importance among combination subsets. By gradually removing low-scored subsets, N:M fine-grained sparsity can be efficiently optimized during the normal training phase. Comprehensive experiments demonstrate that our learning best combination (LBC) method performs consistently better than off-the-shelf N:M sparsity methods across various networks. Our code is released at \url{https://github.com/zyxxmu/LBC}.
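To make the combinatorial view concrete, the sketch below is a minimal PyTorch illustration (not the released implementation; see the linked repo for that). The module name \texttt{NMCombinationLinear} and all hyperparameters are hypothetical. For each group of M consecutive weights it enumerates the $C_M^N$ candidate masks that keep exactly N weights, attaches a score to each candidate, and applies a simplified hard selection of the top-scored mask; the method described above instead optimizes the scores jointly with the weights and removes low-scored combinations gradually during training.

\begin{verbatim}
# Hypothetical sketch of the combinatorial view of N:M sparsity.
# Hard argmax selection is used here for brevity; it does not propagate
# gradients to the scores, which is why the paper relies on joint
# optimization with gradual removal of low-scored combinations.
import itertools
import torch
import torch.nn as nn


class NMCombinationLinear(nn.Module):
    def __init__(self, in_features, out_features, N=2, M=4):
        super().__init__()
        assert in_features % M == 0
        self.N, self.M = N, M
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # All C(M, N) binary masks keeping exactly N out of M positions.
        combos = list(itertools.combinations(range(M), N))
        masks = torch.zeros(len(combos), M)
        for i, idx in enumerate(combos):
            masks[i, list(idx)] = 1.0
        self.register_buffer("candidate_masks", masks)          # (C, M)
        # One learnable score per candidate mask, per weight group.
        num_groups = out_features * (in_features // M)
        self.scores = nn.Parameter(torch.zeros(num_groups, len(combos)))

    def forward(self, x):
        out_f, in_f = self.weight.shape
        groups = self.weight.view(-1, self.M)                   # (G, M)
        best = self.scores.argmax(dim=1)                         # (G,)
        mask = self.candidate_masks[best]                        # (G, M)
        pruned = (groups * mask).view(out_f, in_f)
        return nn.functional.linear(x, pruned)


# Usage example: a 2:4-sparse layer with 8 inputs and 4 outputs.
layer = NMCombinationLinear(8, 4, N=2, M=4)
y = layer(torch.randn(3, 8))
print(y.shape)  # torch.Size([3, 4])
\end{verbatim}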