Although masked language models are highly performant and widely adopted by NLP practitioners, they cannot easily be used for autoregressive language modelling (next-word prediction and sequence probability estimation). We present an LSTM-based autoregressive language model that uses prefix embeddings (from a pretrained masked language model) via fusion (e.g. concatenation) to obtain a richer context representation for language modelling. We find that fusion reliably lowers perplexity (16.74 $\rightarrow$ 15.80), an improvement that is preserved even after transfer to a dataset from a domain different from the training data. We also evaluate the best-performing fusion model by correlating its next-word surprisal estimates with human reading times. Contrary to our expectation, and despite the overall improvement in perplexity, the correlation remains the same as for the baseline model. Lastly, while we focus on language models pretrained on text as the source for fusion, our approach can potentially be extended to fuse any information represented as a fixed-size vector into an autoregressive language model; this includes, for example, sentence-external information retrieved from a knowledge base or representations from multi-modal encoders.
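To make the concatenation-fusion idea concrete, the following is a minimal sketch (not the paper's actual implementation; all class and parameter names such as FusionLSTMLM and fusion_dim are hypothetical) of an LSTM language model whose output layer is conditioned on a fixed-size external context vector, e.g. a prefix embedding produced by a pretrained masked language model.

```python
# Minimal sketch of concatenation fusion into an LSTM language model.
# Assumes the external context (e.g. a masked-LM prefix embedding) is a
# single fixed-size vector per sequence; the actual model may differ.
import torch
import torch.nn as nn

class FusionLSTMLM(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, fusion_dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # The output projection sees the LSTM state concatenated with the context vector.
        self.out = nn.Linear(hidden_dim + fusion_dim, vocab_size)

    def forward(self, tokens, context_vec):
        # tokens: (batch, seq_len) token ids
        # context_vec: (batch, fusion_dim) fixed-size external representation
        hidden, _ = self.lstm(self.embed(tokens))            # (batch, seq_len, hidden_dim)
        ctx = context_vec.unsqueeze(1).expand(-1, hidden.size(1), -1)
        fused = torch.cat([hidden, ctx], dim=-1)             # concatenation fusion
        return self.out(fused)                               # next-word logits
```

Because the fused information enters only as a fixed-size vector, the same interface could in principle accept other sources mentioned above, such as knowledge-base retrievals or multi-modal encoder outputs.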