Recently, sequence-to-sequence (seq2seq) models with the Transformer architecture have achieved remarkable performance on various conditional text generation tasks, such as machine translation. However, most of them are trained with teacher forcing, with the ground-truth label given at each time step, so the model is never exposed to incorrectly generated tokens during training; this hurts its generalization to unseen inputs and is known as the ``exposure bias'' problem. In this work, we propose to mitigate this problem in conditional text generation by contrasting positive pairs with negative pairs, such that the model is exposed to various valid or incorrect perturbations of the inputs, for improved generalization. However, training the model with a naive contrastive learning framework that uses random non-target sequences as negative examples is suboptimal, since they are easily distinguishable from the correct output, especially for models pretrained on large text corpora. Moreover, generating positive examples requires domain-specific augmentation heuristics that may not generalize across diverse domains. To tackle this problem, we propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models. Specifically, we generate negative examples by adding small perturbations to the input sequence so as to minimize its conditional likelihood, and positive examples by adding large perturbations while enforcing a high conditional likelihood. Such ``hard'' positive and negative pairs generated with our method guide the model to better distinguish correct outputs from incorrect ones. We empirically show that our proposed method significantly improves the generalization of seq2seq models on three text generation tasks: machine translation, text summarization, and question generation.
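To make the construction concrete, here is a minimal sketch of how such adversarially perturbed positive and negative pairs could feed a contrastive objective; the notation ($h$, $\delta$, $\epsilon$, $\eta$, $\tau$, $\operatorname{sim}$) is illustrative and not necessarily the paper's exact formulation. Given a representation $h$ of the input sequence and its target $y$, a hard negative takes a small step of size $\epsilon$ in the gradient direction that most decreases the conditional likelihood,
\[
z^{-} \;=\; h \;-\; \epsilon \,\frac{\nabla_{h} \log p_\theta(y \mid h)}{\lVert \nabla_{h} \log p_\theta(y \mid h) \rVert_2},
\]
while a hard positive adds a much larger perturbation ($\eta \gg \epsilon$) constrained to keep the conditional likelihood high,
\[
z^{+} \;=\; h + \delta^{*}, \qquad
\delta^{*} \;=\; \operatorname*{arg\,max}_{\lVert \delta \rVert_2 \le \eta} \; \log p_\theta(y \mid h + \delta).
\]
Both then enter a standard InfoNCE-style contrastive loss with similarity $\operatorname{sim}(\cdot,\cdot)$ and temperature $\tau$,
\[
\mathcal{L}_{\mathrm{cont}} \;=\; -\log \frac{\exp\!\left(\operatorname{sim}(h, z^{+})/\tau\right)}{\exp\!\left(\operatorname{sim}(h, z^{+})/\tau\right) + \exp\!\left(\operatorname{sim}(h, z^{-})/\tau\right)},
\]
which, minimized jointly with the usual maximum-likelihood objective, pushes the model to assign high likelihood near the large-but-valid perturbation and low likelihood near the small-but-adversarial one.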