The recurrent neural network transducer (RNN-T) is a prominent streaming end-to-end (E2E) ASR technology. In RNN-T, the acoustic encoder commonly consists of stacks of LSTMs. Very recently, as an alternative to LSTM layers, the Conformer architecture was introduced, in which the encoder of RNN-T is replaced with a modified Transformer encoder composed of convolutional layers at the frontend and between attention layers. In this paper, we introduce a new streaming ASR model, Convolutional Augmented Recurrent Neural Network Transducers (ConvRNN-T), in which we augment the LSTM-based RNN-T with a novel convolutional frontend consisting of local- and global-context CNN encoders. ConvRNN-T takes advantage of causal 1-D convolutional layers, squeeze-and-excitation, dilation, and residual blocks to provide both global and local audio context representations to the LSTM layers. We show ConvRNN-T outperforms RNN-T, Conformer, and ContextNet on Librispeech and on in-house data. In addition, ConvRNN-T has lower computational complexity than Conformer. ConvRNN-T's superior accuracy, together with its low footprint, makes it a promising candidate for on-device streaming ASR technologies.
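To make the frontend ingredients concrete, here is a minimal NumPy sketch of one residual block combining a causal 1-D convolution with a squeeze-and-excitation-style channel gate. This is a toy illustration, not the paper's exact architecture: the function names are hypothetical, the SE gate omits the usual learned bottleneck FC layers, and the "squeeze" is taken as a running (cumulative) mean so the block stays streaming-safe.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution. x: (time, c_in), w: (kernel, c_in, c_out).
    Left-padding by (kernel - 1) ensures the output at frame t depends
    only on inputs up to frame t, as required for streaming ASR."""
    k = w.shape[0]
    xp = np.pad(x, ((k - 1, 0), (0, 0)))
    y = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        y[t] = np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
    return y

def squeeze_excite_causal(h):
    """Streaming-friendly SE gate (simplified, no learned FC layers):
    squeeze = cumulative mean over the frames seen so far,
    excite  = per-channel sigmoid gate applied to the features."""
    counts = np.arange(1, h.shape[0] + 1)[:, None]
    s = np.cumsum(h, axis=0) / counts          # (time, channels)
    gate = 1.0 / (1.0 + np.exp(-s))            # sigmoid
    return h * gate

def conv_block(x, w):
    """Residual block: causal conv -> ReLU -> SE gating -> skip connection.
    Requires c_in == c_out so the skip connection type-checks."""
    h = np.maximum(causal_conv1d(x, w), 0.0)
    h = squeeze_excite_causal(h)
    return x + h
```

Because every operation in the block is causal, perturbing future frames leaves earlier outputs unchanged, which is the property that lets such a frontend feed a streaming LSTM encoder frame by frame.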