Large pre-trained language models are capable of generating varied and fluent texts. Starting from a prompt, these models produce a narrative that can develop unpredictably. Existing methods of controllable text generation, which guide the narrative in a user-specified direction, require creating a training corpus and an additional time-consuming training procedure. This paper proposes and investigates Collocation2Text, a plug-and-play method for automatic controllable text generation in Russian that requires no fine-tuning. The method is based on two interacting models: the autoregressive language model ruGPT-3 and the autoencoding language model ruRoBERTa. The idea of the method is to shift the output distribution of the autoregressive model according to the output distribution of the autoencoding model, so as to ensure a coherent transition of the narrative towards a guide phrase, which may consist of single words or collocations. The autoencoding model, which takes into account both the left and right contexts of a token, "tells" the autoregressive model which tokens are the most and least logical at the current generation step by increasing or decreasing the probabilities of the corresponding tokens. Experiments on generating news articles with the proposed method showed its effectiveness for automatically producing fluent texts that contain coherent transitions between user-specified phrases.
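The distribution-shifting step described above can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration rather than the paper's implementation: it assumes the two models score the same vocabulary (in practice ruGPT-3 and ruRoBERTa use different tokenizers, so a vocabulary mapping would be required), and the function name shift_distribution and the weight alpha are hypothetical.

```python
import torch
import torch.nn.functional as F

def shift_distribution(ar_logits: torch.Tensor,
                       ae_logits: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Shift autoregressive next-token logits using autoencoding logits.

    ar_logits: logits over the vocabulary from the autoregressive LM,
               conditioned on the left context only.
    ae_logits: logits for the same (masked) position from the
               autoencoding LM, conditioned on both the left context
               and the guide phrase placed to the right.
    alpha:     hypothetical interpolation weight; larger values pull
               generation harder toward tokens the autoencoding model
               considers logical given the upcoming guide phrase.
    """
    log_ar = F.log_softmax(ar_logits, dim=-1)
    log_ae = F.log_softmax(ae_logits, dim=-1)
    # A weighted sum of log-probabilities (a weighted product of the
    # two distributions) boosts tokens both models rate highly and
    # suppresses tokens the autoencoding model finds illogical.
    return F.log_softmax(log_ar + alpha * log_ae, dim=-1)

# Toy usage over a shared 8-token vocabulary.
torch.manual_seed(0)
ar_logits = torch.randn(8)
ae_logits = torch.randn(8)
next_token = torch.argmax(shift_distribution(ar_logits, ae_logits))
print(int(next_token))
```

Under these assumptions, combining the distributions multiplicatively (rather than picking one model's top token) is what lets the bidirectional model steer generation toward the guide phrase without breaking the fluency of the autoregressive model.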