In recent years, Automatic Speech Recognition (ASR) technology has approached human-level performance on conversational speech under relatively clean listening conditions. In more demanding situations involving distant microphones, overlapped speech, background noise, or natural dialogue structures, the ASR error rate is at least an order of magnitude higher than in clean conditions. The visual modality of speech carries the potential to partially overcome these challenges: it contributes to the sub-tasks of speaker diarisation, voice activity detection, and the recovery of the place of articulation, and can compensate for up to 15 dB of noise on average. This article develops AV Taris, a fully differentiable neural network model capable of decoding audio-visual speech in real time. We achieve this by connecting two recently proposed models for audio-visual speech integration and online speech recognition, namely AV Align and Taris. We evaluate AV Taris under the same conditions as AV Align and Taris on LRS2, one of the largest publicly available audio-visual speech datasets. Our results show that AV Taris is superior to the audio-only variant of Taris, demonstrating the utility of the visual modality for speech recognition within the real-time decoding framework defined by Taris. Compared with an equivalent Transformer-based AV Align model that exploits full sentences and does not meet the real-time requirement, AV Taris shows an absolute degradation of approximately 3%. As opposed to the more popular alternative for online speech recognition, the RNN Transducer, Taris offers a greatly simplified, fully differentiable training pipeline. Consequently, AV Taris has the potential to popularise the adoption of Audio-Visual Speech Recognition (AVSR) technology and to overcome the inherent limitations of the audio modality in suboptimal listening conditions.
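To make the combination of the two models more concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the idea described above: an AV Align-style encoder in which audio states attend over video states, and a Taris-style auxiliary word-counting head whose running count can be used to segment the input for online decoding. All module names, feature dimensions, and layer choices below are illustrative assumptions, written in PyTorch.

```python
# Illustrative sketch only: AV Align-style cross-modal fusion feeding a
# Taris-style word counter. Dimensions and module names are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn


class AVFusionEncoder(nn.Module):
    """Audio encoder whose states attend over video encoder states
    (cross-modal alignment in the spirit of AV Align)."""

    def __init__(self, audio_dim=240, video_dim=512, hidden=256):
        super().__init__()
        self.audio_rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.video_rnn = nn.LSTM(video_dim, hidden, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4,
                                                batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, audio, video):
        a, _ = self.audio_rnn(audio)      # (B, Ta, H) audio states
        v, _ = self.video_rnn(video)      # (B, Tv, H) video states
        # Each audio state queries the video states (cross-modal attention).
        av, _ = self.cross_attn(a, v, v)  # (B, Ta, H) aligned video context
        return torch.tanh(self.fuse(torch.cat([a, av], dim=-1)))


class WordCounter(nn.Module):
    """Taris-style auxiliary head: accumulates per-frame increments into a
    running word count, which can delimit blocks for online decoding."""

    def __init__(self, hidden=256):
        super().__init__()
        self.proj = nn.Linear(hidden, 1)

    def forward(self, fused):
        # Non-negative per-frame increments; their cumulative sum
        # approximates the number of words spoken so far.
        increments = torch.sigmoid(self.proj(fused)).squeeze(-1)  # (B, Ta)
        return increments.cumsum(dim=1)


if __name__ == "__main__":
    B, Ta, Tv = 2, 120, 30
    encoder, counter = AVFusionEncoder(), WordCounter()
    fused = encoder(torch.randn(B, Ta, 240), torch.randn(B, Tv, 512))
    running_count = counter(fused)
    # A decoder would consume fused frames in blocks delimited by the word
    # count crossing integer thresholds, enabling real-time decoding.
    print(fused.shape, running_count.shape)
```

Because every component above is a standard differentiable module, the whole pipeline can be trained end to end with backpropagation, which is the simplification over RNN Transducer training that the abstract alludes to.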