Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where poor performance is attributed to exposure bias; at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial approaches for NLG, on the grounds that GANs do not suffer from exposure bias. In this work, we make several surprising observations which contradict common beliefs. We first revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality/diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum, and find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Our results have several implications: 1) the impact of exposure bias on sample quality is less severe than previously thought; 2) temperature tuning provides a better quality/diversity trade-off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive.
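The temperature parameter mentioned above rescales the model's logits before the softmax: temperatures below 1 sharpen the conditional distributions (lower entropy, higher sample quality), while temperatures above 1 flatten them (more diversity). A minimal sketch of this standard mechanism, using plain NumPy rather than any specific model's code:

```python
import numpy as np

def apply_temperature(logits, temperature):
    """Return softmax probabilities after dividing logits by a temperature.

    temperature < 1 sharpens the distribution (lower entropy),
    temperature > 1 flattens it (higher entropy / more diversity).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()           # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]
baseline = apply_temperature(logits, 1.0)   # ordinary softmax
sharp = apply_temperature(logits, 0.5)      # peakier: top token gains mass
flat = apply_temperature(logits, 2.0)       # flatter: mass spreads out
```

Sweeping this single scalar traces out the full quality-diversity curve for a trained MLE model, which is what makes the comparison against GAN variants across the whole spectrum possible.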