Compressing Deep Neural Network (DNN) models to alleviate storage and computation requirements is essential for practical applications, especially on resource-limited devices. Although capable of removing a considerable number of model parameters, previous unstructured or structured weight pruning methods can hardly deliver real inference acceleration, either due to the poor hardware compatibility of unstructured sparsity or due to the low sparsity rate of the structurally pruned network. Aiming at reducing both storage and computation while preserving the original task performance, we propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve high compression and acceleration. Weight coefficients of a selected micro-structured block are unified to reduce the storage and computation of the block without changing the neuron connections; this reduces to micro-structured pruning as a special case when all unified coefficients are set to zero, completely removing the corresponding neuron connections (and hence their storage and computation). In addition, we develop an effective training framework based on the alternating direction method of multipliers (ADMM), which converts our complex constrained optimization into separately solvable subproblems. By iteratively optimizing the subproblems, the desired micro-structure can be ensured with a high compression ratio and low performance degradation. We extensively evaluated our method on a variety of benchmark models and datasets for different applications. Experimental results demonstrate state-of-the-art performance.
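To make the core idea concrete, the following is a minimal NumPy sketch of block-wise weight unification as described above; the block size, the choice of the block mean as the unified value, and the function name are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def unify_blocks(weights, block=4, prune=False):
    # Unify coefficients within each micro-structured block of a 2-D weight
    # matrix: every entry in a block is replaced by a single shared value
    # (here, the block mean as an illustrative choice), so each block can be
    # stored and computed with one coefficient. Setting prune=True forces the
    # unified value to zero, recovering micro-structured pruning as the
    # special case where the block's connections are removed entirely.
    out = weights.copy()
    h, w = out.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = out[i:i + block, j:j + block]
            blk[...] = 0.0 if prune else blk.mean()
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
U = unify_blocks(W, block=4)   # each 4x4 block now holds one shared value
Z = unify_blocks(W, block=4, prune=True)  # pruning special case: all zeros
```

In this sketch an 8x8 matrix with 4x4 blocks stores at most four distinct coefficients after unification, while the neuron connectivity pattern is unchanged; the actual unified values in the paper's framework are learned jointly with the task loss via ADMM rather than fixed to the block mean.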