Training a neural text generation model under the framework of generative adversarial networks (GANs) remains challenging because the entire training process is not differentiable. Existing training strategies suffer either from unreliable gradient estimation or from imprecise sentence representations. Inspired by the principle of sparse coding, we propose SparseGAN, which generates semantically interpretable yet sparse sentence representations as inputs to the discriminator. The key idea is to treat the embedding matrix as an overcomplete dictionary and to approximate the generator's output feature representation at each time step by a linear combination of only a few selected word embeddings. With such semantically rich representations, we not only reduce unnecessary noise for efficient adversarial training, but also make the entire training process fully differentiable. Experiments on multiple text generation datasets yield performance improvements, especially on sequence-level metrics such as BLEU.
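To make the approximation concrete, here is a minimal sketch of one way to realize the sparse-coding step with greedy matching pursuit; the function name `sparse_approx`, the sparsity budget `k`, and the assumption that embedding rows are L2-normalized are illustrative choices, not details taken from the abstract.

```python
import numpy as np

def sparse_approx(h, E, k=3):
    """Approximate a generator hidden state h (dim,) as a sparse linear
    combination of at most k rows of the embedding matrix E (vocab, dim),
    treated as an overcomplete dictionary. Assumes rows of E are unit-norm.
    Returns the sparse reconstruction and the coefficient vector."""
    residual = h.copy()
    coeffs = np.zeros(E.shape[0])
    for _ in range(k):
        # Pick the word embedding most correlated with the current residual.
        scores = E @ residual
        j = int(np.argmax(np.abs(scores)))
        coeffs[j] += scores[j]                   # weight of the chosen atom
        residual = residual - scores[j] * E[j]   # remove its contribution
    return coeffs @ E, coeffs

# Hypothetical usage: approximate one time step's output feature.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 64))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # normalize the dictionary
h = rng.normal(size=64)
h_sparse, w = sparse_approx(h, E, k=3)
print(np.count_nonzero(w))                       # at most 3 active atoms
```

Because the reconstruction is a weighted sum of embeddings rather than a discrete token choice, gradients can flow through the coefficients, which is what keeps the adversarial training pipeline end-to-end differentiable.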