Text generation tasks, including translation, summarization, and language modeling, have seen rapid growth in recent years. Despite these remarkable achievements, the repetition problem has been observed in nearly all text generation models, and it substantially undermines generation quality. Many methods have been proposed to address the repetition problem, but there is no existing theoretical analysis of why this problem occurs and how it can be resolved. In this paper, we propose a new framework for the theoretical analysis of the repetition problem. We first define the Average Repetition Probability (ARP) to characterize the repetition problem quantitatively. Then, we conduct an extensive analysis of the Markov generation model and derive several upper bounds on the average repetition probability, each with an intuitive interpretation. We show that most existing methods essentially minimize these upper bounds, explicitly or implicitly. Grounded in our theory, we show that the repetition problem is, unfortunately, caused by traits of our language itself. One major reason is that too many words predict the same word as their subsequent word with high probability; consequently, it is easy to return to that word and form repetitions, a phenomenon we dub the high inflow problem. Furthermore, we derive a concentration bound on the average repetition probability for a general generation model. Finally, based on the theoretical upper bounds, we propose a novel rebalanced encoding approach to alleviate the high inflow problem. Experimental results show that our theoretical framework applies to general generation models and that the proposed rebalanced encoding approach significantly alleviates the repetition problem. The source code for this paper is available at https://github.com/fuzihaofzh/repetition-problem-nlg.
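To make the high inflow intuition concrete, the following is a minimal Monte Carlo sketch of our own construction, not the paper's formal ARP definition: it estimates how often a first-order Markov generation model re-generates a bigram within a short sequence, comparing a transition matrix with balanced inflow against one in which nearly all words predict the same successor. The vocabulary size, transition matrices, and bigram-loop heuristic are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: estimate how often a first-order Markov generation
# model repeats a bigram, as a rough empirical proxy for the repetition
# behavior the ARP characterizes. All constants here are toy assumptions.

rng = np.random.default_rng(0)
V = 20  # toy vocabulary size

def repetition_rate(P, steps=10, trials=5000):
    """Fraction of sampled sequences that generate the same bigram twice."""
    hits = 0
    for _ in range(trials):
        prev = int(rng.integers(V))
        seen = set()
        for _ in range(steps):
            nxt = int(rng.choice(V, p=P[prev]))
            if (prev, nxt) in seen:  # same bigram generated twice -> repetition
                hits += 1
                break
            seen.add((prev, nxt))
            prev = nxt
    return hits / trials

# Balanced inflow: every word is equally likely to follow every other word.
P_uniform = np.full((V, V), 1.0 / V)

# High inflow: every word predicts word 0 as its successor with probability
# 0.95, mimicking the case where too many words point to the same next word.
P_inflow = np.full((V, V), 0.05 / (V - 1))
P_inflow[:, 0] = 0.95

print("balanced inflow repetition rate:", repetition_rate(P_uniform))
print("high inflow repetition rate    :", repetition_rate(P_inflow))
```

Under these toy settings, the high-inflow chain repeats a bigram in almost every sampled sequence, while the balanced chain rarely does, mirroring the claim that concentrated inflow toward a single word drives repetition.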