Recognizing a word shortly after it is spoken is an important requirement for automatic speech recognition (ASR) systems in real-world scenarios. As a result, a large body of work on streaming audio-only ASR models has been presented in the literature. However, streaming audio-visual automatic speech recognition (AV-ASR) has received little attention in earlier works. In this work, we propose a streaming AV-ASR system based on a hybrid connectionist temporal classification (CTC)/attention neural network architecture. The audio and visual encoder neural networks are both based on the conformer architecture, which is made streamable using chunk-wise self-attention (CSA) and causal convolution. Streaming recognition with a decoder neural network is realized using the triggered attention technique, which performs time-synchronous decoding with joint CTC/attention scoring. For frame-level ASR criteria, such as CTC, a synchronized response from the audio and visual encoders is critical for a joint AV decision-making process. We therefore propose a novel alignment regularization technique that promotes synchronization of the audio and visual encoders, which in turn results in better word error rates (WERs) at all SNR levels for streaming and offline AV-ASR models. The proposed AV-ASR model achieves WERs of 2.0% and 2.6% on the Lip Reading Sentences 3 (LRS3) dataset in offline and online setups, respectively; both are state-of-the-art results when no external training data are used.
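To illustrate the two streamability mechanisms named above, the following is a minimal PyTorch sketch of a chunk-wise self-attention mask and a causal 1-D convolution. The chunk size, the policy of attending to all past chunks, and the helper names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def chunkwise_attention_mask(seq_len: int, chunk_size: int) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask: True = attention allowed.

    Frames may attend within their own chunk and to all earlier chunks,
    so the look-ahead is bounded by the chunk size (streamable).
    """
    # Chunk index of every frame, e.g. chunk_size=4 -> [0,0,0,0,1,1,1,1,...]
    chunk_ids = torch.arange(seq_len) // chunk_size
    # mask[q, k] is True iff key frame k lies in the same or an earlier chunk.
    return chunk_ids.unsqueeze(1) >= chunk_ids.unsqueeze(0)


class CausalConv1d(torch.nn.Module):
    """1-D convolution padded only on the left, so output frame t
    depends on input frames up to t (no future context)."""

    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        self.left_pad = kernel_size - 1
        self.conv = torch.nn.Conv1d(channels, channels, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        return self.conv(F.pad(x, (self.left_pad, 0)))
```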
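The abstract does not specify the form of the alignment regularization term. As one hypothetical reading, a frame-wise distance between the two encoders' outputs could serve as an auxiliary loss that encourages them to respond at the same time step, which is what frame-level criteria such as CTC rely on; the sketch below uses a simple mean-squared error and assumes both encoders emit frame-synchronous features of the same shape.

```python
import torch


def alignment_regularization(audio_feats: torch.Tensor,
                             visual_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical auxiliary loss; audio_feats, visual_feats: (B, T, D)
    frame-synchronous encoder outputs with matching frame rates.

    Penalizes per-frame divergence so audio and visual encoders stay
    temporally in step for joint CTC/attention decision making.
    """
    return torch.mean((audio_feats - visual_feats) ** 2)
```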