In this work, we propose a new automatic speech recognition (ASR) system for air traffic control (ATC), built on feature learning and an end-to-end training procedure. The proposed model integrates a feature learning block, recurrent neural network (RNN) layers, and the connectionist temporal classification (CTC) loss to form an end-to-end ASR model. To cope with the complex acoustic environments of ATC speech, a learning block is designed to extract informative features directly from raw waveforms for acoustic modeling, instead of relying on handcrafted features. Both SincNet and 1D convolution blocks are applied to the raw waveforms, and their outputs are concatenated and fed to the RNN layers for temporal modeling. Owing to its ability to learn representations from raw waveforms, the proposed model can be optimized in a fully end-to-end manner, i.e., from waveform to text. Finally, the multilingual nature of the ATC domain is addressed by constructing a combined vocabulary of Chinese characters and English letters. The proposed approach is validated on a multilingual real-world corpus (ATCSpeech), and the experimental results demonstrate that it outperforms other baselines, achieving a 6.9\% character error rate.
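For concreteness, the following is a minimal PyTorch sketch of such a waveform-to-text pipeline: a SincNet-style learnable band-pass filterbank and a plain 1D convolution both process the raw waveform, their outputs are concatenated, passed through bidirectional RNN layers, and trained with the CTC loss. The layer sizes, kernel widths, 8 kHz sample rate, vocabulary size, and the choice of a bidirectional GRU are illustrative assumptions rather than the paper's exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConv1d(nn.Module):
    """Learnable band-pass filterbank on the raw waveform (SincNet-style)."""

    def __init__(self, out_channels=64, kernel_size=129, stride=16, sample_rate=8000):
        super().__init__()
        self.out_channels, self.kernel_size, self.stride = out_channels, kernel_size, stride
        self.sample_rate = sample_rate
        # each filter is parameterised only by a low cut-off and a bandwidth (Hz)
        self.low_hz = nn.Parameter(
            torch.linspace(30, sample_rate / 2 - 200, out_channels).unsqueeze(1))
        self.band_hz = nn.Parameter(torch.full((out_channels, 1), 100.0))
        # fixed time axis (seconds) and Hamming window
        n = (torch.arange(kernel_size) - (kernel_size - 1) / 2) / sample_rate
        self.register_buffer("n_", n.unsqueeze(0))
        self.register_buffer("window", torch.hamming_window(kernel_size).unsqueeze(0))

    def forward(self, wav):                      # wav: (batch, 1, samples)
        low = self.low_hz.abs()
        high = (low + self.band_hz.abs()).clamp(max=self.sample_rate / 2)
        # band-pass = difference of two windowed sinc low-pass responses
        filters = (2 * high * torch.sinc(2 * high * self.n_)
                   - 2 * low * torch.sinc(2 * low * self.n_)) * self.window
        filters = filters.view(self.out_channels, 1, self.kernel_size)
        return F.conv1d(wav, filters, stride=self.stride, padding=self.kernel_size // 2)


class WaveformCTCModel(nn.Module):
    """Raw waveform -> (SincNet branch ++ Conv1d branch) -> BiGRU -> CTC logits."""

    def __init__(self, vocab_size, feat_channels=64, hidden=256):
        super().__init__()
        self.sinc_branch = SincConv1d(out_channels=feat_channels, stride=16)
        self.conv_branch = nn.Conv1d(1, feat_channels, kernel_size=129,
                                     stride=16, padding=64)
        self.rnn = nn.GRU(2 * feat_channels, hidden, num_layers=3,
                          batch_first=True, bidirectional=True)
        # vocabulary = Chinese characters + English letters + CTC blank (assumed size)
        self.classifier = nn.Linear(2 * hidden, vocab_size)

    def forward(self, wav):                      # wav: (batch, 1, samples)
        feats = torch.cat([self.sinc_branch(wav), self.conv_branch(wav)], dim=1)
        feats = feats.transpose(1, 2)            # (batch, frames, channels)
        out, _ = self.rnn(feats)
        return self.classifier(out).log_softmax(dim=-1)   # (batch, frames, vocab)


# One training step with the CTC loss on dummy data.
model = WaveformCTCModel(vocab_size=5000)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
wav = torch.randn(2, 1, 8000)                    # two one-second 8 kHz utterances
targets = torch.randint(1, 5000, (2, 12))        # dummy label sequences
log_probs = model(wav).transpose(0, 1)           # CTCLoss expects (frames, batch, vocab)
frame_lens = torch.full((2,), log_probs.size(0), dtype=torch.long)
target_lens = torch.full((2,), 12, dtype=torch.long)
loss = ctc(log_probs, targets, frame_lens, target_lens)
loss.backward()
\end{verbatim}

Both branches use the same kernel size, stride, and padding so that their frame-level outputs align before concatenation; this alignment constraint is an assumption of the sketch, not a detail reported in the abstract.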