Non-autoregressive (NAR) generation, which was first proposed in neural machine translation (NMT) to speed up inference, has attracted much attention in both the machine learning and natural language processing communities. While NAR generation can significantly accelerate inference for machine translation, the speedup comes at the cost of reduced translation accuracy compared with its counterpart, autoregressive (AR) generation. In recent years, many new models and algorithms have been proposed to bridge the accuracy gap between NAR and AR generation. In this paper, we conduct a systematic survey with comparisons and discussions of various non-autoregressive translation (NAT) models from different aspects. Specifically, we categorize the efforts on NAT into several groups, including data manipulation, modeling methods, training criteria, decoding algorithms, and benefits from pre-trained models. Furthermore, we briefly review other applications of NAR models beyond machine translation, such as dialogue generation, text summarization, grammatical error correction, semantic parsing, speech synthesis, and automatic speech recognition. In addition, we discuss potential directions for future exploration, including reducing the dependency on knowledge distillation (KD), dynamic length prediction, pre-training for NAR, and wider applications. We hope this survey can help researchers capture the latest progress in NAR generation, inspire the design of advanced NAR models and algorithms, and enable industry practitioners to choose appropriate solutions for their applications. The web page of this survey is at \url{https://github.com/LitterBrother-Xiao/Overview-of-Non-autoregressive-Applications}.