Speech emotion recognition (SER) is the task of recognising human emotional states from speech. SER is crucial for helping dialogue systems truly understand our emotions and become trustworthy human conversational partners. Due to the lengthy nature of speech, SER lacks the abundant labelled data that powerful models such as deep neural networks require. Complex models pre-trained on large-scale speech datasets have been successfully applied to SER via transfer learning. However, fine-tuning these complex models still demands large memory space and results in low inference efficiency. In this paper, we argue that fast yet effective SER is achievable with self-distillation, a method that simultaneously fine-tunes a pre-trained model and trains shallower versions of itself. The benefits of our self-distillation framework are threefold: (1) applying self-distillation to the acoustic modality overcomes the limited ground-truth labels of speech data and outperforms existing models on an SER dataset; (2) executing the model at different depths achieves adaptive accuracy-efficiency trade-offs on resource-limited edge devices; (3) fine-tuning a pre-trained model for self-distillation, rather than training from scratch, leads to faster training and state-of-the-art accuracy on data with small quantities of label information.
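To make the mechanism concrete, the following is a minimal PyTorch sketch of self-distillation with early exits, written under our own assumptions rather than taken from the paper: a toy layer stack stands in for the pre-trained speech encoder, and the exit depths, head design, and loss weights (EXIT_LAYERS, temperature, alpha) are illustrative. The deepest classifier distils its softened predictions into the shallower exits, while every exit also receives the ground-truth cross-entropy loss.

```python
# Hedged sketch of self-distillation fine-tuning for SER.
# All names and hyperparameters here are illustrative assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS = 4         # e.g. angry / happy / sad / neutral (assumption)
HIDDEN = 256
EXIT_LAYERS = (2, 4, 6)  # depths at which shallow classifiers are attached

class SelfDistilledSER(nn.Module):
    def __init__(self, depth=6):
        super().__init__()
        # Stand-in for a pre-trained speech encoder; in practice these
        # layers would be loaded from a checkpoint and fine-tuned,
        # not trained from scratch.
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(HIDDEN, HIDDEN), nn.GELU())
            for _ in range(depth)
        )
        # One emotion classifier per exit, including the deepest layer.
        self.heads = nn.ModuleDict(
            {str(d): nn.Linear(HIDDEN, NUM_EMOTIONS) for d in EXIT_LAYERS}
        )

    def forward(self, x):
        logits = {}
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)
            if i in EXIT_LAYERS:
                # Mean-pool over time frames before classifying.
                logits[i] = self.heads[str(i)](x.mean(dim=1))
        return logits  # {depth: [batch, NUM_EMOTIONS]}

def self_distillation_loss(logits, labels, temperature=2.0, alpha=0.5):
    """Cross-entropy on every exit, plus KL from the deepest exit
    (teacher) to each shallower exit (student)."""
    deepest = max(logits)
    teacher = (logits[deepest] / temperature).softmax(dim=-1).detach()
    loss = sum(F.cross_entropy(l, labels) for l in logits.values())
    for d, l in logits.items():
        if d != deepest:
            student = F.log_softmax(l / temperature, dim=-1)
            loss = loss + alpha * temperature ** 2 * F.kl_div(
                student, teacher, reduction="batchmean")
    return loss

# Toy usage: acoustic features [batch, frames, HIDDEN] with emotion labels.
model = SelfDistilledSER()
feats = torch.randn(8, 50, HIDDEN)
labels = torch.randint(0, NUM_EMOTIONS, (8,))
loss = self_distillation_loss(model(feats), labels)
loss.backward()
```

At inference time, a resource-limited device can run only the first EXIT_LAYERS[0] layers and stop at the corresponding head, which is where the adaptive accuracy-efficiency trade-off in benefit (2) comes from.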