Text generative models trained via Maximum Likelihood Estimation (MLE) suffer from the notorious exposure bias problem, and Generative Adversarial Networks (GANs) have shown potential to tackle this problem. Existing language GANs adopt estimators such as REINFORCE or continuous relaxations to model word distributions. The inherent limitations of such estimators lead current models to rely on pre-training techniques (MLE pre-training or pre-trained embeddings). Representation modeling methods, which are free from those limitations, are nevertheless seldom explored because of their poor performance in previous attempts. Our analyses reveal that invalid sampling methods and unhealthy gradients are the main contributors to this unsatisfactory performance. In this work, we present two techniques to address these problems: dropout sampling and fully normalized LSTM. Based on these two techniques, we propose InitialGAN, whose parameters are all randomly initialized. In addition, we introduce a new evaluation metric, Least Coverage Rate, to better evaluate the quality of generated samples. The experimental results demonstrate that InitialGAN outperforms both MLE and the other compared models. To the best of our knowledge, this is the first time a language GAN has outperformed MLE without using any pre-training techniques.