State-of-the-art neural text generation models are typically trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens. However, during inference, the model needs to make a prediction conditioned on the tokens it has generated itself. This train-test discrepancy is referred to as exposure bias. Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate this bias. Most existing approaches design the schedule based on training steps, which generally requires careful tuning for each training setup. In this work, we introduce Dynamic Scheduled Sampling with Imitation Loss (DySI), which maintains the schedule based solely on training-time accuracy, while enhancing the curriculum with an imitation loss that encourages the behavior of the decoder to be indistinguishable from that of a teacher-forced decoder. DySI is universally applicable across training setups with minimal tuning. Extensive experiments and analysis show that DySI not only achieves notable improvements on standard machine translation benchmarks, but also significantly improves the robustness of other text generation models.
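To make the idea concrete, below is a minimal PyTorch sketch of one training step that mixes teacher forcing with self-feeding, where the probability of feeding back the model's own prediction is tied to the running training-time accuracy. The toy GRU decoder, the accuracy-to-probability mapping, and the state-matching imitation term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDecoder(nn.Module):
    # Tiny GRU decoder, a stand-in for whatever autoregressive decoder is trained.
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRUCell(dim, dim)
        self.out = nn.Linear(dim, vocab)

    def step(self, token, hidden):
        hidden = self.rnn(self.emb(token), hidden)
        return self.out(hidden), hidden

def dysi_style_step(decoder, tgt, hidden0, running_acc):
    """One training step: feed back the model's own prediction with a probability
    given by the running training-time accuracy (an assumed form of the schedule),
    and add a hypothetical imitation term matching free-running decoder states to
    teacher-forced ones."""
    B, T = tgt.shape
    h_free, h_forced = hidden0, hidden0.clone()
    logits_all, free_states, forced_states = [], [], []
    inp = tgt[:, 0]
    for t in range(1, T):
        logits, h_free = decoder.step(inp, h_free)            # possibly self-fed path
        _, h_forced = decoder.step(tgt[:, t - 1], h_forced)   # fully teacher-forced path
        logits_all.append(logits)
        free_states.append(h_free)
        forced_states.append(h_forced)
        # Dynamic schedule: sample the model's own token with prob = running_acc.
        use_model = torch.rand(B, device=tgt.device) < running_acc
        inp = torch.where(use_model, logits.argmax(-1), tgt[:, t])
    logits_all = torch.stack(logits_all, 1)                   # (B, T-1, vocab)
    nll = F.cross_entropy(logits_all.transpose(1, 2), tgt[:, 1:])
    # Pull the (partly) self-fed decoder states toward the teacher-forced ones,
    # so the two behaviors become hard to tell apart.
    imitation = F.mse_loss(torch.stack(free_states, 1),
                           torch.stack(forced_states, 1).detach())
    return nll + imitation

# Usage: tgt = torch.randint(0, 100, (4, 12)); dec = ToyDecoder()
# loss = dysi_style_step(dec, tgt, torch.zeros(4, 32), running_acc=0.6); loss.backward()
```

In this sketch, higher training accuracy increases exposure to the model's own predictions, so the curriculum adapts to how well the model is doing rather than to a step count.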