In this paper, a novel architecture for a deep recurrent neural network, residual LSTM, is introduced. A plain LSTM has an internal memory cell that can learn long-term dependencies of sequential data. It also provides a temporal shortcut path to avoid vanishing or exploding gradients in the temporal domain. The residual LSTM provides an additional spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers. Compared with the previous work, highway LSTM, residual LSTM separates the spatial shortcut path from the temporal one by using output layers, which can help to avoid a conflict between spatial- and temporal-domain gradient flows. Furthermore, residual LSTM reuses the output projection matrix and the output gate of the LSTM to control the spatial information flow, instead of additional gate networks, which reduces network parameters by more than 10%. An experiment on distant speech recognition with the AMI SDM corpus shows that 10-layer plain and highway LSTM networks showed 13.7% and 6.2% increases in WER over their 3-layer baselines, respectively. In contrast, the 10-layer residual LSTM network provided the lowest WER, 41.0%, which corresponds to 3.3% and 2.8% WER reductions over the plain and highway LSTM networks, respectively.
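The spatial shortcut described above can be illustrated with a minimal single-step sketch. This is an assumption-laden simplification, not the paper's exact formulation: the function name, parameter dictionary, and gate ordering are hypothetical, and the layer input and hidden state are assumed to have the same dimension so the residual addition is well defined. The key idea shown is that the layer input (the output of the layer below) is added to the projected cell output and then modulated by the existing output gate, so no extra gate network is needed for the spatial path.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_lstm_step(x, h_prev, c_prev, p):
    """One time step of a residual LSTM layer (illustrative sketch).

    x      : input from the layer below (the spatial shortcut source)
    h_prev : previous hidden state of this layer
    c_prev : previous memory cell of this layer
    p      : dict with stacked gate weights W, U, bias b, and
             output projection matrix W_p (hypothetical names)
    """
    n = c_prev.shape[0]
    # All four gate pre-activations computed in one stacked product.
    z = p["W"] @ x + p["U"] @ h_prev + p["b"]
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell update
    # Temporal shortcut: the memory cell carries gradients across time.
    c = f * c_prev + i * g
    # Spatial shortcut: add the lower-layer input to the projected cell
    # output, reusing W_p and the output gate o instead of a new gate.
    h = o * (p["W_p"] @ np.tanh(c) + x)
    return h, c
```

Because the shortcut passes through the output layer rather than the cell state, the spatial gradient path stays separate from the temporal one, which is the conflict-avoidance property the abstract attributes to residual LSTM.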