Model size and complexity remain the biggest challenges in deploying speech enhancement and separation systems on low-resource devices such as earphones and hearing aids. Although methods such as compression, distillation, and quantization can be applied to large models, they often come at a cost to model performance. In this paper, we present a simple design paradigm for explicitly building ultra-lightweight models without sacrificing performance. Motivated by sub-band frequency-LSTM (F-LSTM) architectures, we introduce group communication (GroupComm), in which a feature vector is split into smaller groups and a small processing block performs inter-group communication. Unlike standard F-LSTM models, where the sub-band outputs are concatenated, an ultra-small module is applied to all groups in parallel, which allows a significant decrease in model size. Experimental results show that, compared with a strong baseline model that is already lightweight, GroupComm achieves on-par performance with 35.6 times fewer parameters and 2.3 times fewer operations.
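The core idea — split a feature vector into groups and let one tiny shared module mix information across them — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy: the paper applies a small processing block (e.g., an LSTM) across groups, whereas here a mean-pooled context plus a single shared linear map stands in for the communication module, and all names (`group_communication`, `W`, `b`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_communication(x, num_groups, W, b):
    """Toy GroupComm sketch: split a feature vector into groups and
    mix information across groups with one tiny shared module.
    NOTE: mean-pooled context + shared linear map is a stand-in for
    the paper's small inter-group block, not the actual architecture."""
    B, N = x.shape
    assert N % num_groups == 0
    g = N // num_groups                      # per-group feature size
    groups = x.reshape(B, num_groups, g)     # (B, K, g)
    # inter-group communication: every group sees a summary of all groups
    context = groups.mean(axis=1, keepdims=True)                  # (B, 1, g)
    context = np.broadcast_to(context, groups.shape)              # (B, K, g)
    mixed = np.concatenate([groups, context], axis=-1)            # (B, K, 2g)
    out = np.tanh(mixed @ W + b)   # one weight matrix shared by all groups
    return out.reshape(B, N)

N, K = 64, 8                       # feature size, number of groups
g = N // K
W = rng.standard_normal((2 * g, g)) * 0.1   # shared weights: 2g*g params
b = np.zeros(g)
x = rng.standard_normal((2, N))
y = group_communication(x, K, W, b)
print(y.shape)                     # (2, 64)
```

The parameter savings come from the shared module operating on the group size `g = N/K` rather than the full feature size `N`: the communication block here has `2*g*g + g` parameters regardless of how large `N` grows, whereas a dense layer over the full vector would need on the order of `N*N`.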