Self-supervised pre-trained transformers have improved the state of the art on a variety of speech tasks. Due to the quadratic time and space complexity of self-attention, they usually operate at the level of relatively short (e.g., utterance) segments. In this paper, we study the use of context, i.e., surrounding segments, during fine-tuning and propose a new approach called context-aware fine-tuning. We attach a context module on top of the last layer of a pre-trained model to encode the whole segment into a context embedding vector, which is then used as an additional feature for the final prediction. During the fine-tuning stage, we introduce an auxiliary loss that encourages this context embedding vector to be similar to the context vectors of surrounding segments. This allows the model to make predictions without access to these surrounding segments at inference time and requires only a tiny overhead compared to standard fine-tuned models. We evaluate the proposed approach using the SLUE and Librilight benchmarks for several downstream tasks: automatic speech recognition (ASR), named entity recognition (NER), and sentiment analysis (SA). The results show that context-aware fine-tuning not only outperforms a standard fine-tuning baseline but also rivals a strong context injection baseline that uses neighboring speech segments during inference.
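The mechanism described above can be sketched with a toy numpy example. The abstract does not specify the exact form of the context module, the pooling, or the similarity loss, so everything below (mean pooling, a linear projection, a cosine-distance auxiliary loss, and feature concatenation) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_CTX = 8, 4  # hypothetical feature dimensions

# Hypothetical linear context module placed on top of the last layer.
W_ctx = rng.normal(size=(D_MODEL, D_CTX)) / np.sqrt(D_MODEL)

def context_embedding(frames: np.ndarray) -> np.ndarray:
    """Encode one segment's top-layer features (T, D_MODEL) into a
    single context embedding vector (D_CTX,) via mean pooling."""
    return frames.mean(axis=0) @ W_ctx

def auxiliary_loss(center: np.ndarray, neighbors: list) -> float:
    """Fine-tuning-time auxiliary loss: pull the segment's context
    vector toward its neighbors' (here, mean 1 - cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return float(np.mean([1.0 - cos(center, n) for n in neighbors]))

# Toy segments: (num_frames, D_MODEL) features from a pre-trained encoder.
seg = rng.normal(size=(10, D_MODEL))
left = rng.normal(size=(10, D_MODEL))
right = rng.normal(size=(10, D_MODEL))

c = context_embedding(seg)
loss = auxiliary_loss(c, [context_embedding(left), context_embedding(right)])

# At inference time only the current segment is needed: its context vector
# is broadcast and concatenated with each frame feature for the final head.
augmented = np.concatenate([seg, np.tile(c, (seg.shape[0], 1))], axis=1)
```

Because the auxiliary loss is used only during fine-tuning, inference drops the neighbor segments entirely; the only extra cost over a standard fine-tuned model is the small context module itself.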