We focus on addressing the dense backward propagation issue that limits the training efficiency of N:M fine-grained sparsity, which preserves at most N out of M consecutive weights and achieves practical speedups on the N:M sparse tensor core. To this end, we present a novel method of Bi-directional Masks (Bi-Mask) with two central innovations: 1) Separate sparse masks in the forward and backward propagation directions to obtain training acceleration. This disentangles the forward and backward weight sparsity and removes the otherwise dense gradient computation. 2) An efficient weight row permutation method to maintain performance. It picks the permutation candidate with the most eligible N:M weight blocks in the backward direction, minimizing the gradient gap between traditional uni-directional masks and our bi-directional masks. Compared with the existing uni-directional scenario that applies a transposable mask to enable backward acceleration, our Bi-Mask is experimentally demonstrated to be superior in performance. Moreover, our Bi-Mask performs on par with, or even better than, methods that fail to achieve backward acceleration. The project of this paper is available at \url{https://github.com/zyxxmu/Bi-Mask}.
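To make the two ideas concrete, below is a minimal PyTorch sketch, not the authors' released code: helper names such as \texttt{nm\_mask} and \texttt{count\_eligible\_blocks} are illustrative. It builds independent N:M masks for the forward weight and for its transpose used in the backward pass, and scores a candidate row permutation by the number of weight blocks that already satisfy the N:M pattern.

\begin{verbatim}
# Illustrative sketch only (assumed helper names); not the paper's implementation.
import torch

def nm_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the N largest-magnitude weights in every group of M along the last dim."""
    rows, cols = weight.shape
    groups = weight.abs().reshape(rows, cols // m, m)
    topk = groups.topk(n, dim=-1).indices
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, topk, 1.0)
    return mask.reshape(rows, cols)

def count_eligible_blocks(weight: torch.Tensor, n: int = 2, m: int = 4) -> int:
    """Count M-sized blocks whose smallest (M - N) entries are already zero."""
    rows, cols = weight.shape
    groups = weight.abs().reshape(rows, cols // m, m)
    smallest = groups.topk(m - n, dim=-1, largest=False).values
    return int((smallest.sum(dim=-1) == 0).sum())

# Forward uses a mask on W; backward uses an independent mask on W^T, so the
# gradient pass can also run through the N:M sparse tensor core.
W = torch.randn(8, 16)
forward_mask = nm_mask(W)
backward_mask = nm_mask(W.t().contiguous()).t()

# A row permutation would be chosen to maximize the eligible-block count on W^T,
# shrinking the gap between the forward and backward masks.
perm = torch.randperm(W.shape[0])          # one candidate permutation
score = count_eligible_blocks(W[perm].t().contiguous())
\end{verbatim}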