Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) of neural language models often lead to degenerate solutions -- the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease the probabilities of certain tokens (e.g., unlikelihood training), but they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method -- contrastive search -- to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach outperforms state-of-the-art text generation methods as evaluated by both human and automatic metrics.
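To make the decoding rule concrete, below is a minimal sketch of a single contrastive search step, assuming PyTorch. It scores each of the top-k candidate tokens by their model confidence minus a degeneration penalty (maximum cosine similarity to the already-generated prefix). The helper `candidate_hidden_fn`, and the names `alpha` and `k`, are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def contrastive_search_step(
    next_token_probs: torch.Tensor,   # (vocab_size,) model confidence p(v | x_<t)
    context_hidden: torch.Tensor,     # (t-1, d) representations of the generated prefix
    candidate_hidden_fn,              # hypothetical helper: token id -> (d,) representation
    k: int = 5,
    alpha: float = 0.6,
) -> int:
    """Pick the next token that balances model confidence against a
    degeneration penalty (max cosine similarity to prefix tokens)."""
    top_probs, top_ids = torch.topk(next_token_probs, k)
    ctx = F.normalize(context_hidden, dim=-1)
    best_score, best_id = -float("inf"), top_ids[0].item()
    for p, v in zip(top_probs.tolist(), top_ids.tolist()):
        h_v = F.normalize(candidate_hidden_fn(v), dim=-1)
        degeneration_penalty = (ctx @ h_v).max().item()  # cosine sim to prefix tokens
        score = (1 - alpha) * p - alpha * degeneration_penalty
        if score > best_score:
            best_score, best_id = score, v
    return best_id
```

With `alpha = 0`, the rule reduces to greedy search over the top-k candidates; larger `alpha` penalizes tokens whose representations are too similar to the prefix, discouraging repetition.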