Access to external knowledge is essential for many natural language processing tasks, such as question answering and dialogue. Existing methods often rely on a parametric model that stores knowledge in its parameters, or on a retrieval-augmented model that has access to an external knowledge source. Parametric and retrieval-augmented models have complementary strengths in terms of computational efficiency and predictive accuracy. To combine the strengths of both approaches, we propose the Efficient Memory-Augmented Transformer (EMAT) -- it encodes external knowledge into a key-value memory and exploits fast maximum inner product search for memory querying. We also introduce pre-training tasks that allow EMAT to encode informative key-value representations, and to learn an implicit strategy to integrate multiple memory slots into the transformer. Experiments on various knowledge-intensive tasks, such as question answering and dialogue datasets, show that simply augmenting parametric models (T5-base) with our method produces more accurate results (e.g., 25.8 -> 44.3 EM on NQ) while retaining a high throughput (e.g., 1000 queries/s on NQ). Compared to retrieval-augmented models, EMAT runs substantially faster across the board and produces more accurate results on WoW and ELI5. Our code and datasets are available at https://github.com/uclnlp/EMAT.
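The core mechanism named in the abstract is querying a dense key-value memory with maximum inner product search (MIPS). The sketch below illustrates that lookup in NumPy under assumed dimensions and slot counts; it is not the authors' implementation (EMAT additionally integrates the retrieved value slots inside the transformer), and a library such as FAISS would replace the brute-force search for large memories.

```python
import numpy as np

# Illustrative sketch only: a key-value memory of dense vectors queried by MIPS.
# d, num_slots, and top_k are assumptions for the example, not values from the paper.
d, num_slots, top_k = 768, 10_000, 4

rng = np.random.default_rng(0)
keys = rng.standard_normal((num_slots, d)).astype(np.float32)    # encoded key vectors
values = rng.standard_normal((num_slots, d)).astype(np.float32)  # encoded value vectors

def query_memory(q: np.ndarray, k: int = top_k) -> np.ndarray:
    """Return the value vectors of the k memory slots whose keys have the
    largest inner product with the query q (brute-force MIPS)."""
    scores = keys @ q                       # inner product with every key
    top = np.argpartition(-scores, k)[:k]   # indices of the k highest-scoring slots
    top = top[np.argsort(-scores[top])]     # order those k slots by score
    return values[top]                      # (k, d) value slots to feed the model

retrieved = query_memory(rng.standard_normal(d).astype(np.float32))
print(retrieved.shape)  # (4, 768)
```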