Training recurrent neural networks (RNNs) with backpropagation through time (BPTT) has known drawbacks, such as the difficulty of capturing long-term dependencies in sequences. Successful alternatives to BPTT have not yet been discovered. Recently, backpropagation with synthetic gradients produced by a decoupled neural interface (DNI) module has been proposed as a replacement for BPTT for training RNNs. On the other hand, it has been shown that the representations learned with synthetic gradients differ from those learned with real gradients, even though the resulting networks are functionally identical. In this project, we explore ways of combining synthetic and real gradients, with application to neural language modeling tasks. Empirically, we demonstrate the effectiveness of alternating training with synthetic and real gradients after periodic warm restarts on language modeling tasks.
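To make the alternation concrete, the sketch below (not from the paper; the cosine restart schedule, the even/odd cycle rule, and all function names are illustrative assumptions) shows one plausible way to switch the gradient source at each warm restart.

```python
import math

# A minimal sketch, assuming a cosine learning-rate schedule with warm
# restarts (as in SGDR) and a rule that flips the gradient source at each
# restart: even cycles use synthetic (DNI) gradients, odd cycles use real
# BPTT gradients. This is illustrative, not the authors' implementation.

def lr_at(step, restart_period, lr_max=1.0, lr_min=0.01):
    """Cosine-annealed learning rate that warm-restarts every `restart_period` steps."""
    t = step % restart_period
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / restart_period))

def gradient_source(step, restart_period):
    """Return which gradient source to use for this step's update."""
    cycle = step // restart_period
    return "synthetic" if cycle % 2 == 0 else "real"

if __name__ == "__main__":
    restart_period = 1000
    for step in range(0, 4000, 500):
        print(step, gradient_source(step, restart_period),
              round(lr_at(step, restart_period), 3))
```

In an actual training loop, `gradient_source` would decide whether the update for the current cycle is computed from the DNI module's predicted gradients or from full BPTT, while `lr_at` resets the learning rate at each restart boundary.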