Inspired by the notion that {\it to copy is easier than to memorize}, in this work, we introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing it to reference similar contexts from the entire training corpus. We build a directed heterogeneous graph between an input context and its semantically related neighbors selected from the training corpus, where nodes are tokens in the input context and in the retrieved neighbor contexts, and edges represent connections between nodes. Graph neural networks (GNNs) are constructed upon the graph to aggregate information from similar contexts to decode the token. This learning paradigm provides direct access to reference contexts and helps improve the model's generalization ability. We conduct comprehensive experiments to validate the effectiveness of GNN-LM: GNN-LM achieves a new state-of-the-art perplexity of 14.8 on WikiText-103 (a 4.5-point improvement over its vanilla LM counterpart) and shows substantial improvements on the One Billion Word and Enwiki8 datasets against strong baselines. In-depth ablation studies are performed to understand the mechanics of GNN-LM. The code can be found at \url{https://github.com/ShannonAI/GNN-LM}
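Below is a minimal sketch of the retrieve-then-aggregate idea described above, not the authors' implementation: it assumes a hypothetical flat memory of token representations from the training corpus, a nearest-neighbor retrieval step, and a single attention-style message-passing update standing in for the paper's heterogeneous GNN layers.
\begin{verbatim}
# Simplified sketch of the GNN-LM idea (hypothetical names; PyTorch only).
import torch
import torch.nn.functional as F

def retrieve_neighbors(query, memory_keys, k=8):
    """Indices of the k nearest memory entries by L2 distance."""
    dists = torch.cdist(query.unsqueeze(0), memory_keys).squeeze(0)
    return dists.topk(k, largest=False).indices

def aggregate_with_neighbors(query, neighbor_feats, w_q, w_k, w_v):
    """One attention-style message-passing step from neighbor tokens
    to the current token (stand-in for a heterogeneous GNN layer)."""
    q = query @ w_q                      # (d,)
    k = neighbor_feats @ w_k             # (k, d)
    v = neighbor_feats @ w_v             # (k, d)
    attn = F.softmax(k @ q / q.shape[-1] ** 0.5, dim=0)
    return query + attn @ v              # residual update of the query state

# Toy usage: 1000 memory tokens with 64-dim representations.
d, num_memory = 64, 1000
memory_keys = torch.randn(num_memory, d)   # contexts from the training corpus
memory_vals = torch.randn(num_memory, d)   # neighbor token representations
w_q, w_k, w_v = (torch.randn(d, d) * 0.05 for _ in range(3))

query = torch.randn(d)                     # hidden state of the current token
idx = retrieve_neighbors(query, memory_keys, k=8)
enhanced = aggregate_with_neighbors(query, memory_vals[idx], w_q, w_k, w_v)
# `enhanced` would then feed the LM's output softmax to decode the next token.
\end{verbatim}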