Generating text with autoregressive language models (LMs) is central to many natural language processing (NLP) applications. Previous decoding methods often produce text that contains degenerate repetitions or lacks semantic consistency. Recently, Su et al. introduced a new decoding method, contrastive search, based on the isotropic representation space of the language model, and achieved a new state of the art on various benchmarks. Additionally, Su et al. argued that the representations of autoregressive LMs (e.g., GPT-2) are intrinsically anisotropic, a view also shared by previous studies. Therefore, to ensure that the language model follows an isotropic distribution, Su et al. proposed a contrastive learning scheme, SimCTG, which calibrates the language model's representations through additional training. In this study, we first answer the question: "Are autoregressive LMs really anisotropic?" To this end, we extensively evaluate the isotropy of LMs across 16 major languages. Surprisingly, we find that the anisotropy problem only exists in the two specific English GPT-2-small/medium models. In contrast, all other evaluated LMs are naturally isotropic, contradicting the conclusion drawn by previous studies. Based on our findings, we further assess the contrastive search decoding method using off-the-shelf LMs on four generation tasks across 16 languages. Our experimental results demonstrate that contrastive search significantly outperforms previous decoding methods without any additional training. More notably, on 12 of the 16 evaluated languages, contrastive search performs comparably with human-level performance as judged by human evaluations. Our code and other related resources are publicly available at https://github.com/yxuansu/Contrastive_Search_Is_What_You_Need.
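The contrastive search decoding rule discussed above selects, at each step, the candidate token that maximizes a trade-off between the model's confidence and a degeneration penalty (the candidate's maximum cosine similarity to the representations of previously generated tokens). A minimal sketch of one selection step in numpy follows; the function name, arguments, and toy values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def contrastive_search_step(probs, cand_ids, cand_reps, prev_reps, alpha=0.6):
    """One step of contrastive search (illustrative sketch).

    probs:     model probabilities of the top-k candidate tokens
    cand_ids:  token ids of those candidates
    cand_reps: hidden representations of the candidates, shape (k, d)
    prev_reps: hidden representations of previous tokens, shape (t, d)
    alpha:     weight of the degeneration penalty
    """
    # Normalize so that dot products equal cosine similarities.
    cand = cand_reps / np.linalg.norm(cand_reps, axis=1, keepdims=True)
    prev = prev_reps / np.linalg.norm(prev_reps, axis=1, keepdims=True)
    # Degeneration penalty: highest similarity to any previous token.
    penalty = (cand @ prev.T).max(axis=1)
    # Balance model confidence against the penalty and pick the best token.
    scores = (1 - alpha) * probs - alpha * penalty
    return int(cand_ids[int(np.argmax(scores))])

# Toy example: the first candidate repeats a previous representation
# (penalty 1.0), the second is orthogonal to it (penalty 0.0), so the
# second candidate wins despite its lower model probability.
chosen = contrastive_search_step(
    probs=np.array([0.5, 0.4]),
    cand_ids=np.array([7, 9]),
    cand_reps=np.array([[1.0, 0.0], [0.0, 1.0]]),
    prev_reps=np.array([[1.0, 0.0]]),
)
# chosen == 9
```

With alpha = 0, this reduces to greedy decoding; larger alpha penalizes repetitive continuations more strongly, which is why an isotropic representation space matters for the penalty to be discriminative.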