Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question-answering and natural language inference. The problem of generating sequences directly from these models has received relatively little attention, in part because generating from undirected models departs significantly from conventional monotonic generation in directed sequence models. We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than the resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected sequence models. We demonstrate this by evaluating various handcrafted and learned decoding strategies on a BERT-like machine translation model (Lample & Conneau, 2019). The proposed approach achieves constant-time translation results on par with linear-time translation results from the same undirected sequence model, while both are competitive with the state-of-the-art on WMT'14 English-German translation.
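As a rough illustration of the generalized generation process described in the abstract, the sketch below (hypothetical names; not the paper's implementation) frames generation as repeatedly (i) selecting a set of positions and (ii) replacing the symbols at those positions with model predictions. Changing only the position-selection rule recovers the special cases mentioned above, such as autoregressive and refinement-based non-autoregressive decoding.

```python
# Minimal sketch of a generalized generation loop, assuming a model exposed
# through a hypothetical `predict_tokens` callable that returns a prediction
# for every position of a partially masked sequence.
from typing import Callable, List

MASK = "<mask>"

def generate(length: int,
             predict_tokens: Callable[[List[str]], List[str]],
             select_positions: Callable[[int, List[str]], List[int]],
             num_steps: int) -> List[str]:
    """Generic loop: position selection followed by symbol replacement."""
    seq = [MASK] * length
    for step in range(num_steps):
        positions = select_positions(step, seq)  # which coordinates to update
        preds = predict_tokens(seq)              # model predictions for all positions
        for i in positions:
            seq[i] = preds[i]
    return seq

# Special cases obtained by changing only the position-selection rule:
def left_to_right(step: int, seq: List[str]) -> List[int]:
    # Autoregressive decoding: reveal one new position per step
    # (num_steps equals the sequence length).
    return [step]

def all_positions(step: int, seq: List[str]) -> List[int]:
    # Refinement-based non-autoregressive decoding: rewrite every
    # position at each of a constant number of iterations.
    return list(range(len(seq)))
```

A semi-autoregressive variant follows the same pattern by selecting a fixed-size block of positions per step; the loop itself is unchanged.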