Emotion recognition is a challenging task due to the limited availability of labeled in-the-wild datasets. Self-supervised learning has shown improvements on tasks with limited labeled data in domains such as speech and natural language processing. Models such as BERT learn to incorporate context into word embeddings, which translates to improved performance on downstream tasks like question answering. In this work, we extend self-supervised training to multi-modal applications. We learn multi-modal representations using a transformer trained on the masked language modeling task with audio, visual, and text features. The pre-trained model is then fine-tuned on the downstream task of emotion recognition. Our results on the CMU-MOSEI dataset show that this pre-training technique can improve emotion recognition performance by up to 3% compared to the baseline.
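To make the described pipeline concrete, below is a minimal sketch of masked multi-modal pre-training followed by emotion-recognition fine-tuning. It is not the authors' implementation: the feature dimensions (assumed to follow common CMU-MOSEI conventions), the masking ratio, and all module names are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a transformer encoder over
# projected audio/visual/text features, pre-trained by reconstructing masked
# tokens (a masked-LM analogue), then fine-tuned to predict emotion logits.
import torch
import torch.nn as nn


class MultimodalTransformer(nn.Module):
    def __init__(self, d_audio=74, d_visual=35, d_text=300, d_model=256,
                 n_heads=4, n_layers=4, n_emotions=6):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj = nn.ModuleDict({
            "audio": nn.Linear(d_audio, d_model),
            "visual": nn.Linear(d_visual, d_model),
            "text": nn.Linear(d_text, d_model),
        })
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Two heads: reconstruction for pre-training, classification for fine-tuning.
        self.reconstruct = nn.Linear(d_model, d_model)
        self.classify = nn.Linear(d_model, n_emotions)

    def embed(self, audio, visual, text):
        # Concatenate the three modalities along the time axis.
        return torch.cat([self.proj["audio"](audio),
                          self.proj["visual"](visual),
                          self.proj["text"](text)], dim=1)

    def forward(self, audio, visual, text, mask_ratio=0.15, pretrain=True):
        tokens = self.embed(audio, visual, text)
        if pretrain:
            # Randomly mask a fraction of tokens and train the encoder to
            # reconstruct their original embeddings.
            mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
            corrupted = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
            hidden = self.encoder(corrupted)
            return nn.functional.mse_loss(self.reconstruct(hidden)[mask], tokens[mask])
        # Fine-tuning: mean-pool the encoded sequence and predict emotion logits.
        hidden = self.encoder(tokens)
        return self.classify(hidden.mean(dim=1))
```

In this sketch, the same encoder weights are reused for both stages: pre-training minimizes the reconstruction loss on unlabeled sequences, and fine-tuning swaps in the classification head with a standard cross-entropy or multi-label loss on the labeled emotion data.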