This paper demonstrates fine-tuning a BART model to construct a sentence from an arbitrary set of words, a task that has historically been difficult in NLP. The model is trained to build sentences from four given words, but after training it can also generate sentences when fewer or more words are provided. The generated sentences are generally of high quality. The model has potential real-world applications, and the task can also serve as an evaluation mechanism for language models.
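To make the setup concrete, below is a minimal inference sketch assuming a Hugging Face BART checkpoint fine-tuned on this word-set-to-sentence task. The checkpoint name, the space-joined keyword input format, and the generation settings are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch (not the authors' code) of using a BART model fine-tuned
# to weave a set of given words into one fluent sentence.
from transformers import BartForConditionalGeneration, BartTokenizer

# "facebook/bart-base" stands in for the actual fine-tuned checkpoint.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def words_to_sentence(words):
    # Assumed input format: the given words joined by spaces as the source
    # sequence; the fine-tuned model is expected to produce a sentence
    # containing all of them.
    inputs = tokenizer(" ".join(words), return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=48, num_beams=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Training used four words per example, but the paper reports the model
# also handles fewer or more words at inference time.
print(words_to_sentence(["dog", "park", "ball", "children"]))
```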