Text generation tasks, including translation, summarization, and language modeling, have seen rapid growth in recent years. Despite the remarkable achievements, the repetition problem has been observed in nearly all text generation models, and it extensively undermines generation performance. Many methods have been proposed to solve the repetition problem, but there is no existing theoretical analysis showing why this problem happens and how it can be resolved. In this paper, we propose a new framework for the theoretical analysis of the repetition problem. We first define the Average Repetition Probability (ARP) to characterize the repetition problem quantitatively. Then, we conduct an extensive analysis of the Markov generation model and derive several upper bounds on the average repetition probability, each with an intuitive interpretation. We show that most existing methods essentially minimize these upper bounds, explicitly or implicitly. Grounded in our theory, we show that the repetition problem is, unfortunately, caused by the traits of our language itself. One major cause is that too many words predict the same word as their subsequent word with high probability; consequently, generation easily flows back to that word and forms repetitions. We dub this the high inflow problem. Furthermore, we derive a concentration bound on the average repetition probability for general generation models. Finally, based on the theoretical upper bounds, we propose a novel rebalanced encoding approach to alleviate the high inflow problem. The experimental results show that our theoretical framework is applicable to general generation models and that the proposed rebalanced encoding approach alleviates the repetition problem significantly. The source code of this paper can be obtained from \url{https://github.com/fuzihaofzh/repetition-problem-nlg}.
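To make the high inflow intuition concrete, the following is a minimal sketch, not the paper's formal ARP definition: in a toy Markov model, a word that many predecessors predict with high probability is revisited often, so sampled sequences loop back through it and repeat. The vocabulary, the HIGH_PROB threshold, and the repeated-bigram rate used as a crude repetition proxy are all illustrative assumptions, not quantities taken from the paper.

```python
# Toy illustration of the "high inflow" problem in a Markov generation model.
# All names and numbers here are illustrative assumptions, not the paper's ARP.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V = len(vocab)

# Transition matrix P[i, j] = Pr(next word = j | current word = i).
# Column 0 ("the") is given a large inflow: most words predict it strongly.
P = np.full((V, V), 0.05)
P[:, 0] = 0.8
np.fill_diagonal(P, 0.05)          # no strong self-loops
P /= P.sum(axis=1, keepdims=True)  # renormalize each row to a distribution

HIGH_PROB = 0.5  # illustrative threshold for a "high probability" edge

# Inflow of word j: number of words that predict j with probability > threshold.
inflow = (P > HIGH_PROB).sum(axis=0)
for j, w in enumerate(vocab):
    print(f"inflow({w}) = {inflow[j]}")

# Sample a sequence and report the fraction of repeated bigrams as a crude
# proxy for repetition (not the paper's ARP metric itself).
state, seq = 0, [0]
for _ in range(200):
    state = rng.choice(V, p=P[state])
    seq.append(state)
bigrams = list(zip(seq, seq[1:]))
rep_rate = 1 - len(set(bigrams)) / len(bigrams)
print(f"repeated-bigram rate: {rep_rate:.2f}")
```

Running the sketch shows inflow("the") dominating the other words and a high repeated-bigram rate, matching the intuition that high-inflow words pull generation into repetitive loops; a rebalanced encoding would aim to reduce such concentrated inflow.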