For sequence generation, both autoregressive and non-autoregressive models have been developed in recent years. Autoregressive models achieve high generation quality, but their sequential decoding scheme makes inference slow. Non-autoregressive models accelerate inference with parallel decoding, yet their generation quality still lags because of the difficulty of modeling the multiple modalities in the data. To address this multi-modality issue, we propose Diff-Glat, a non-autoregressive model featuring a modality diffusion process and residual glancing training. The modality diffusion process decomposes the modalities, reducing the number of modalities to be learned in each transition, while residual glancing sampling further smooths the modality learning procedure. Experiments demonstrate that, without using knowledge distillation data, Diff-Glat achieves superior decoding efficiency and accuracy compared with the autoregressive Transformer.
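Since the abstract builds on glancing training, the following is a minimal sketch of the GLAT-style glancing-sampling step that residual glancing sampling extends. The `decoder`, `x_embed`, and `y_embed` names and interfaces are illustrative assumptions, not the paper's actual implementation, and the residual variant in Diff-Glat differs in details not shown here.

```python
import torch
import torch.nn.functional as F

def glancing_step(decoder, x_embed, y_embed, y_target, ratio=0.5):
    """One GLAT-style glancing-sampling step (a minimal sketch, not the paper's code).

    decoder:  hypothetical non-autoregressive decoder mapping input
              embeddings (batch, length, dim) to token logits
    x_embed:  decoder inputs, e.g. copied encoder outputs (batch, length, dim)
    y_embed:  embeddings of the reference tokens, same shape as x_embed
    y_target: reference token ids (batch, length)
    ratio:    fraction of the prediction error to reveal as glanced tokens
    """
    with torch.no_grad():
        y_hat = decoder(x_embed).argmax(-1)                # first parallel pass
        dist = (y_hat != y_target).sum(-1, keepdim=True)   # per-sentence Hamming distance
        n_glance = (dist.float() * ratio).long()           # tokens to reveal per sentence
        scores = torch.rand(y_target.shape, device=y_target.device)
        # mark the n_glance positions with the highest random scores as "glanced"
        kth = scores.sort(-1, descending=True).values.gather(1, (n_glance - 1).clamp(min=0))
        glance = (scores >= kth) & (n_glance > 0)
    # feed reference embeddings at glanced positions, model inputs elsewhere
    mixed = torch.where(glance.unsqueeze(-1), y_embed, x_embed)
    logits = decoder(mixed)                                # second parallel pass
    # train only on the positions that were not revealed
    return F.cross_entropy(logits[~glance], y_target[~glance])
```

The key design point this sketch illustrates is that the harder the first-pass prediction (larger Hamming distance), the more reference tokens are revealed, so the model is always trained at a difficulty it can handle.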