Semi-parametric models, which augment generation with retrieval, have led to impressive results in language modeling and machine translation, due to their ability to retrieve fine-grained information from a datastore of examples. One of the most prominent approaches, $k$NN-MT, exhibits strong domain adaptation capabilities by retrieving tokens from domain-specific datastores \citep{khandelwal2020nearest}. However, $k$NN-MT requires an expensive retrieval operation for every single generated token, leading to a very low decoding speed (around 8 times slower than a purely parametric model). In this paper, we introduce a \textit{chunk-based} $k$NN-MT model which retrieves chunks of tokens from the datastore, instead of individual tokens. We propose several strategies for incorporating the retrieved chunks into the generation process, and for selecting the steps at which the model needs to search for neighbors in the datastore. Experiments on machine translation in two settings, static and ``on-the-fly'' domain adaptation, show that the chunk-based $k$NN-MT model leads to significant speed-ups (up to 4 times) with only a small drop in translation quality.
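The per-token retrieval that makes vanilla $k$NN-MT slow can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the datastore maps decoder hidden states (keys) to target tokens (values), the $k$ nearest keys to the current query vector are turned into a retrieval distribution, and that distribution is interpolated with the model's own distribution. All function names and the toy datastore are our own assumptions.

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=2, temperature=1.0):
    """Toy sketch of kNN-MT retrieval: softmax over negative distances
    to the k nearest datastore entries, accumulated per target token."""
    dists = np.linalg.norm(keys - query, axis=1)   # distance to every key
    nn = np.argsort(dists)[:k]                     # indices of k nearest keys
    weights = np.exp(-dists[nn] / temperature)     # closer keys get more mass
    weights /= weights.sum()
    p = np.zeros(vocab_size)
    for w, v in zip(weights, values[nn]):          # a token may appear in
        p[v] += w                                  # several neighbors
    return p

def interpolate(p_model, p_knn, lam=0.5):
    """Final next-token distribution: a mixture of the parametric model's
    distribution and the retrieval distribution, weighted by lam."""
    return (1 - lam) * p_model + lam * p_knn
```

Note that `knn_distribution` must be called once per generated token; the chunk-based variant described in the abstract amortizes this cost by retrieving multi-token chunks and skipping the search at some decoding steps.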