Despite strong performance on many sequence-to-sequence tasks, autoregressive models trained with maximum likelihood estimation suffer from exposure bias, i.e. the discrepancy between the ground-truth prefixes used during training and the model-generated prefixes used at inference time. Scheduled sampling is a simple and empirically successful approach that addresses this issue by incorporating model-generated prefixes into training. However, it has been argued that it is an inconsistent training objective that leads models to ignore the prefixes altogether. In this paper, we conduct systematic experiments and find that while scheduled sampling ameliorates exposure bias by increasing the model's reliance on the input sequence, it worsens performance when the prefix at inference time is correct, a form of catastrophic forgetting. We propose using Elastic Weight Consolidation to better balance mitigating exposure bias with retaining performance. Experiments on four IWSLT'14 and WMT'14 translation datasets demonstrate that our approach alleviates catastrophic forgetting and significantly outperforms both maximum likelihood estimation and scheduled sampling baselines.
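As a brief illustration (a minimal sketch of the standard Elastic Weight Consolidation formulation, not necessarily the exact objective used in our experiments), the scheduled sampling loss $\mathcal{L}_{\mathrm{SS}}$ can be regularized towards the maximum-likelihood-trained parameters $\theta^{*}$ via a quadratic penalty weighted by the diagonal Fisher information $F_i$ and a hyperparameter $\lambda$:
$$
\mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{SS}}(\theta) \;+\; \lambda \sum_i F_i \left(\theta_i - \theta_i^{*}\right)^2
$$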