Semi-parametric models, which augment generation with retrieval, have led to impressive results in language modeling and machine translation, due to their ability to leverage information retrieved from a datastore of examples. One of the most prominent approaches, $k$NN-MT, achieves outstanding performance on domain adaptation by retrieving tokens from a domain-specific datastore \citep{khandelwal2020nearest}. However, $k$NN-MT requires retrieval for every single generated token, leading to a very low decoding speed (around 8 times slower than a parametric model). In this paper, we introduce a \textit{chunk-based} $k$NN-MT model which retrieves chunks of tokens from the datastore, instead of single tokens. We propose several strategies for incorporating the retrieved chunks into the generation process, and for selecting the steps at which the model needs to search for neighbors in the datastore. Experiments on machine translation in two settings, static domain adaptation and ``on-the-fly'' adaptation, show that the chunk-based $k$NN-MT model leads to a significant speed-up (up to 4 times) with only a small drop in translation quality.
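To make the retrieval step concrete, the following is a minimal sketch of chunk-based nearest-neighbor lookup: datastore keys are context vectors and values are multi-token chunks, so one search can propose several output tokens at once instead of a single token per step. All names, dimensions, and the random toy data are illustrative assumptions, not the paper's actual datastore (which is built from decoder hidden states over a parallel corpus, typically with an approximate-search index such as FAISS).

```python
import numpy as np

# Toy datastore (illustrative only): keys are context vectors, values are
# 4-token chunks of ids. Real kNN-MT keys are decoder hidden states.
rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 16)).astype(np.float32)            # 1000 entries, dim 16
chunks = [list(rng.integers(0, 32000, size=4)) for _ in range(1000)]

def retrieve_chunks(query, k=8):
    """Return the k chunks whose keys are nearest to the query (squared L2)."""
    dists = np.sum((keys - query) ** 2, axis=1)   # distance to every key
    nearest = np.argpartition(dists, k)[:k]       # indices of the k smallest
    nearest = nearest[np.argsort(dists[nearest])] # order those k by distance
    return [chunks[i] for i in nearest], dists[nearest]

# Query near entry 42: its 4-token chunk should be retrieved first,
# and generation could then emit up to 4 tokens from one search.
query = keys[42] + 0.01 * rng.normal(size=16).astype(np.float32)
retrieved, d = retrieve_chunks(query, k=8)
```

Because each successful lookup proposes a whole chunk, the model can skip datastore searches for several decoding steps, which is the source of the speed-up described above.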