Numerous compression and acceleration strategies have achieved outstanding results on classification tasks in various fields, such as computer vision and speech signal processing. Nevertheless, the same strategies have yielded unsatisfactory performance on regression tasks because regression and classification tasks differ in nature. In this paper, a novel sign-exponent-only floating-point network (SEOFP-NET) technique is proposed to compress the model size and accelerate the inference time for speech enhancement, a regression task in speech signal processing. The proposed method compresses the sizes of deep neural network (DNN)-based speech enhancement models by quantizing the fraction bits of single-precision floating-point parameters during training. Before inference, all parameters in the trained SEOFP-NET model are slightly adjusted so that inference can be accelerated by replacing the floating-point multiplier with an integer adder. For generalization, the SEOFP-NET technique is applied to different speech enhancement tasks in speech signal processing, with different model architectures and under various corpora. The experimental results indicate that the size of SEOFP-NET models can be significantly compressed by up to 81.249% without noticeably degrading their speech enhancement performance, and the inference time can be accelerated to 1.212x compared with the baseline models. The results also verify that the proposed SEOFP-NET can cooperate with other efficiency strategies to achieve a synergy effect for model compression. In addition, the just noticeable difference (JND) was applied in a user study to statistically analyze the effect of speech enhancement on listening. The results indicate that listeners cannot easily differentiate between the enhanced speech signals processed by the baseline model and by the proposed SEOFP-NET.
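To illustrate the core idea, the following is a minimal Python sketch (not the authors' implementation) of how keeping only the sign and exponent bits of a float32 parameter allows a multiplication to be replaced by an integer addition on the bit patterns. The helper names `seofp_quantize` and `seofp_multiply` are hypothetical, and special cases such as zero, overflow, and denormals are ignored for brevity.

```python
import struct

def seofp_quantize(x: float) -> float:
    """Zero the 23 fraction bits of a float32, keeping only sign and exponent.
    The result is a signed power of two (hypothetical helper; rounding ignored)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= 0xFF800000  # keep sign (1 bit) + exponent (8 bits), clear fraction (23 bits)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def seofp_multiply(a: float, b: float) -> float:
    """Multiply two SEOFP values using integer addition instead of a float multiply.
    With zero fractions, the product's exponent is the sum of the two exponents
    minus one bias, and its sign is the XOR of the two signs.
    Zero, overflow, and denormal inputs are not handled in this sketch."""
    ia = struct.unpack('<I', struct.pack('<f', a))[0]
    ib = struct.unpack('<I', struct.pack('<f', b))[0]
    sign = (ia ^ ib) & 0x80000000
    exp = ((ia >> 23) & 0xFF) + ((ib >> 23) & 0xFF) - 127  # add exponents, remove one bias
    out = sign | ((exp & 0xFF) << 23)
    return struct.unpack('<f', struct.pack('<I', out))[0]

x = seofp_quantize(3.7)    # -> 2.0
w = seofp_quantize(-0.3)   # -> -0.25
print(seofp_multiply(x, w), x * w)  # both print -0.5
```

Because the quantized parameters are powers of two, the exponent-field addition above reproduces the exact floating-point product, which is why the multiplier can be swapped for an adder at inference time.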