Improving the performance of end-to-end ASR models on long utterances ranging from minutes to hours in length is an ongoing challenge in speech recognition. A common solution is to segment the audio in advance using a separate voice activity detector (VAD) that decides segment boundary locations based purely on acoustic speech/non-speech information. VAD segmenters, however, may be sub-optimal for real-world speech where, e.g., a complete sentence that should be taken as a whole may contain hesitations in the middle ("set an alarm for... 5 o'clock"). We propose to replace the VAD with an end-to-end ASR model capable of predicting segment boundaries in a streaming fashion, allowing the segmentation decision to be conditioned not only on better acoustic features but also on semantic features from the decoded text, with negligible extra computation. In experiments on real-world long-form audio (YouTube) with lengths of up to 30 minutes, we demonstrate 8.5% relative WER improvement and 250 ms reduction in median end-of-segment latency compared to the VAD segmenter baseline on a state-of-the-art Conformer RNN-T model.
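The core idea can be illustrated with a minimal sketch: instead of cutting on acoustic silence alone, the segmenter consumes each decoded token together with an end-of-segment posterior from the decoder, so a mid-sentence hesitation with a semantically incomplete hypothesis does not trigger a boundary. The class, interface, and probabilities below are hypothetical stand-ins for illustration, not the paper's actual model or API.

```python
# Hypothetical sketch of joint decoding and end-of-segment (EOS) prediction.
# A VAD would decide boundaries from acoustics alone; here the decision can
# also reflect the decoded text so far (e.g., "set an alarm for ..." is
# semantically incomplete, so its EOS posterior stays low at the pause).

from dataclasses import dataclass, field


@dataclass
class StreamingSegmenter:
    """Finalize a segment when the decoder's EOS posterior is high."""
    eos_threshold: float = 0.5
    hypothesis: list = field(default_factory=list)

    def step(self, token: str, eos_prob: float) -> bool:
        """Consume one decoded token; return True to close the segment."""
        self.hypothesis.append(token)
        return eos_prob >= self.eos_threshold

    def finalize(self) -> str:
        """Return and reset the current segment hypothesis."""
        segment = " ".join(self.hypothesis)
        self.hypothesis.clear()
        return segment


if __name__ == "__main__":
    # Toy stream of (token, EOS posterior) pairs a decoder might emit.
    # The hesitation after "for" is where a purely acoustic VAD would cut;
    # the EOS posterior stays low there because the hypothesis is incomplete.
    stream = [("set", 0.01), ("an", 0.01), ("alarm", 0.05),
              ("for", 0.10),  # long acoustic pause here
              ("5", 0.05), ("o'clock", 0.92)]
    seg = StreamingSegmenter(eos_threshold=0.5)
    for token, p_eos in stream:
        if seg.step(token, p_eos):
            print("segment:", seg.finalize())
```

Because the boundary decision reuses posteriors the decoder already computes, this style of segmentation adds essentially no extra computation on top of streaming recognition, which is the efficiency claim made in the abstract.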