Learning deeper models is usually a simple and effective approach to improving model performance, but deeper models have more parameters and are harder to train. To obtain a deeper model, simply stacking more layers seems to work well, but previous works have claimed that this alone cannot benefit the model. We propose to train a deeper model with a recurrent mechanism, which loops the encoder and decoder blocks of the Transformer in the depth direction. To address the increase in model parameters, we share parameters across the different recursive steps. We conduct experiments on the WMT16 English-to-German and WMT14 English-to-French translation tasks; our model outperforms the shallow Transformer-Base/Big baselines by 0.35 and 1.45 BLEU points, respectively, while using only 27.23% of the Transformer-Big parameters. Compared to a deep Transformer (20-layer encoder, 6-layer decoder), our model achieves similar performance and inference speed, but with only 54.72% of its parameters.
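The core idea of depth-wise recurrence with parameter sharing can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: `shared_block` stands in for a full Transformer encoder block, and the weight matrix `W` is a hypothetical stand-in for the block's shared parameters. The point is that looping one block in the depth direction increases effective depth without increasing the parameter count.

```python
import numpy as np

def shared_block(x, W):
    # Stand-in for one Transformer block: a residual feed-forward sublayer.
    # (A real block would also include self-attention and layer norm.)
    return x + np.maximum(0.0, x @ W)

def recurrent_encoder(x, W, depth):
    # Loop the SAME block `depth` times, reusing the parameters W each step.
    # Effective depth grows with `depth`, but the parameter count does not.
    for _ in range(depth):
        x = shared_block(x, W)
    return x

x = np.random.randn(2, 8)   # (batch of 2, hidden size 8)
W = 0.01 * np.random.randn(8, 8)
out = recurrent_encoder(x, W, depth=20)  # 20 recursive steps, one set of weights
```

Under this sketch, a 20-step recurrent encoder stores only one block's parameters, whereas a conventional 20-layer encoder would store 20 distinct sets.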