Multi-step ahead prediction in language models is challenging due to the discrepancy between the training and test time processes. At test time, a sequence predictor must make predictions given its own past predictions as input, rather than the past targets that are provided during training. This difference, known as exposure bias, can lead to the compounding of errors along a generated sequence at test time. To improve generalization in neural language models and address compounding errors, we propose \textit{Nearest-Neighbor Replacement Sampling} -- a curriculum learning-based method that gradually changes an initially deterministic teacher policy into a stochastic policy. A token at a given time-step is replaced with a sampled nearest neighbor of the past target, drawn from its top $k$ most similar words with probability proportional to the truncated cosine similarity between the original word and each neighbor. This allows the learner to explore alternatives when the current policy provided by the teacher is sub-optimal or difficult to learn from. The proposed method is straightforward, online, and requires little additional memory. We report our findings on two language modelling benchmarks and find that the proposed method further improves performance when used in conjunction with scheduled sampling.
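To make the sampling step concrete, the sketch below illustrates one possible realization of the replacement rule described above. It is not the authors' implementation: the embedding matrix \texttt{embeddings}, the neighborhood size \texttt{k}, the curriculum probability \texttt{replace\_prob}, and the linear annealing schedule are all illustrative assumptions.

\begin{verbatim}
# Minimal sketch of nearest-neighbor replacement sampling (illustrative only).
# Assumes a pre-trained embedding matrix `embeddings` (vocab_size x dim) and a
# curriculum-controlled replacement probability `replace_prob`.
import numpy as np

def top_k_neighbors(embeddings, token_id, k=5):
    """Return the top-k most similar token ids and their cosine similarities."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = vecs @ vecs[token_id]
    sims[token_id] = -np.inf                 # exclude the token itself
    idx = np.argpartition(-sims, k)[:k]
    return idx, sims[idx]

def replace_token(embeddings, token_id, replace_prob, k=5, rng=np.random):
    """With probability `replace_prob`, swap the target token for one of its
    top-k neighbors, sampled proportionally to truncated cosine similarity."""
    if rng.random() >= replace_prob:
        return token_id                      # keep the ground-truth target
    idx, sims = top_k_neighbors(embeddings, token_id, k)
    sims = np.clip(sims, 0.0, None)          # truncate negative similarities
    if sims.sum() == 0.0:
        return token_id
    return int(rng.choice(idx, p=sims / sims.sum()))

def linear_schedule(step, total_steps, max_prob=0.25):
    """Hypothetical curriculum: anneal from teacher forcing (prob 0) towards
    a stochastic policy with replacement probability up to `max_prob`."""
    return max_prob * min(1.0, step / total_steps)
\end{verbatim}

In this sketch, \texttt{replace\_prob} plays the role of the curriculum signal: early in training it is near zero, so the learner sees mostly ground-truth targets, and it grows over time so that the teacher policy becomes increasingly stochastic.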