Semi-parametric Nearest Neighbor Language Models ($k$NN-LMs) have produced impressive gains over purely parametric LMs by leveraging large-scale neighborhood retrieval over external memory datastores. However, there has been little investigation into adapting such models to new domains. This work attempts to fill that gap and proposes the following approaches for adapting $k$NN-LMs: 1) adapting the underlying LM (using Adapters), 2) expanding neighborhood retrieval over an additional adaptation datastore, and 3) adapting the weights (scores) of retrieved neighbors using a learned Rescorer module. We study each adaptation strategy separately, as well as their combined effect, through ablation experiments and an extensive set of evaluations over seven adaptation domains. Our combined adaptation approach consistently outperforms purely parametric adaptation and zero-shot ($k$NN-LM) baselines that construct datastores from the adaptation data. On average, we see perplexity improvements of 17.1\% and 16\% over these respective baselines, across domains.
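As background, the three strategies above can be read against the standard $k$NN-LM interpolation (Khandelwal et al., 2020). The sketch below uses assumed notation rather than symbols taken from this abstract: $\lambda$ is the interpolation weight, $\mathcal{N}(x)$ the set of retrieved key--value neighbors (drawn from the original datastore, optionally merged with an adaptation datastore), $q_x$ the query representation of context $x$, and $r_\phi$ a stand-in for the learned Rescorer that replaces the usual negative key--query distance.
\begin{align*}
p(y \mid x) &= \lambda\, p_{k\mathrm{NN}}(y \mid x) + (1-\lambda)\, p_{\mathrm{LM}}(y \mid x), \\
p_{k\mathrm{NN}}(y \mid x) &\propto \sum_{(k_i,\, v_i) \in \mathcal{N}(x)} \mathbb{1}[y = v_i]\, \exp\!\big(r_\phi(k_i, q_x)\big).
\end{align*}
Under this reading, strategy 1 modifies $p_{\mathrm{LM}}$ via Adapters, strategy 2 enlarges the source of $\mathcal{N}(x)$, and strategy 3 learns $r_\phi$; the exact parameterization is a detail of the paper body, not fixed by the abstract.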