Score-function-based natural language generation (NLG) approaches such as REINFORCE generally suffer from low sample efficiency and training instability. This is mainly because sampling in a discrete space is non-differentiable, so these methods must treat the discriminator as a black box and ignore its gradient information. To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation with an off-policy update and a first-order Taylor expansion. This approach enables us to train NLG models from scratch with a smaller batch size and without maximum likelihood pre-training, and it outperforms existing GAN-based methods on multiple metrics of quality and diversity. The source code and data are available at https://github.com/MiuLab/TaylorGAN
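For intuition, the following is a minimal sketch (not the paper's exact derivation) of how a first-order Taylor expansion of the discriminator's reward can augment the score-function estimator; the reward R, token embedding map e(·), and generator parameters θ are generic symbols introduced here for illustration. The standard REINFORCE estimator is

\[
\nabla_\theta \, \mathbb{E}_{y \sim p_\theta}\!\left[ R(y) \right]
  = \mathbb{E}_{y \sim p_\theta}\!\left[ R(y)\, \nabla_\theta \log p_\theta(y) \right],
\]

which uses only the scalar reward R(y) from the discriminator. Because the discriminator is differentiable with respect to token embeddings, the reward of a neighboring token y' can be approximated around a sampled y by

\[
R(y') \;\approx\; R(y) + \nabla_{e}\, R(y)^{\top} \big( e(y') - e(y) \big),
\]

so a single sample plus one backward pass through the discriminator yields approximate rewards for other candidate tokens, rather than discarding the gradient information as a black-box estimator would.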