Data augmentation is a widely used strategy for training robust machine learning models. It partially alleviates the problem of limited data for tasks like speech emotion recognition (SER), where collecting data is expensive and challenging. This study proposes CopyPaste, a perceptually motivated novel augmentation procedure for SER. Assuming that the presence of emotions other than neutral dictates a speaker's overall perceived emotion in a recording, the concatenation of an emotional utterance (emotion E) and a neutral utterance can still be labeled with emotion E. We hypothesize that SER performance can be improved by using these concatenated utterances in model training. To verify this, three CopyPaste schemes are tested on two deep learning models: one trained independently and another using transfer learning from an x-vector model, a speaker recognition model. We observed that all three CopyPaste schemes improve SER performance on all three datasets considered: MSP-Podcast, Crema-D, and IEMOCAP. Additionally, CopyPaste performs better than noise augmentation, and using them together improves SER performance further. Our experiments on noisy test sets suggest that CopyPaste is effective even in noisy test conditions.
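The core augmentation step can be sketched as follows, assuming raw waveforms are held as NumPy arrays; the function name, the `neutral_first` flag, and the toy signals are illustrative assumptions, not details from the paper, and the actual schemes may differ in how the two utterances are ordered and combined:

```python
import numpy as np

def copypaste(emotional_wave, neutral_wave, emotion_label, neutral_first=False):
    """Sketch of the CopyPaste idea: concatenate an emotional and a neutral
    utterance; the combined clip inherits the emotional label (emotion E).
    The ordering flag is a hypothetical knob, standing in for the paper's
    different concatenation schemes."""
    if neutral_first:
        combined = np.concatenate([neutral_wave, emotional_wave])
    else:
        combined = np.concatenate([emotional_wave, neutral_wave])
    return combined, emotion_label

# Toy stand-ins for real audio: 1 s "angry" clip and 0.5 s neutral clip at 16 kHz.
angry = np.ones(16000, dtype=np.float32)
neutral = np.zeros(8000, dtype=np.float32)

augmented, label = copypaste(angry, neutral, "angry")
```

In training, such concatenated clips would simply be added to the pool of labeled examples alongside the original utterances.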