Large pretrained language models have recently conquered the field of natural language processing. As an alternative to the predominant masked language modelling introduced in BERT, the T5 model introduced a more general training objective, namely sequence-to-sequence transformation, which subsumes masked language modelling but more naturally fits text generation tasks such as machine translation, summarization, open-domain question answering, text simplification, and dialogue systems. The monolingual variants of T5 models have been limited to well-resourced languages, while the massively multilingual T5 model supports 101 languages. In contrast, we trained two T5-type sequence-to-sequence models of different sizes for the morphologically rich Slovene language, which has far fewer resources, and analyzed their behavior. On classification tasks, the SloT5 models mostly lag behind the monolingual Slovene SloBERTa model, but they should be considered for generative tasks.
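To make the text-to-text framing concrete, the sketch below (not part of the paper; the checkpoint name cjvt/t5-sl-small, the example sentence, and the use of the Hugging Face Transformers API are assumptions) illustrates how a T5-type sequence-to-sequence model subsumes masked language modelling: masked spans in the input are replaced by sentinel tokens, and the model is trained to generate the missing spans as the target sequence.

```python
# A minimal sketch, assuming the Hugging Face Transformers library and a
# published Slovene T5 checkpoint named "cjvt/t5-sl-small" (hypothetical here;
# substitute the actual model identifier).
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "cjvt/t5-sl-small"  # assumed model identifier
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Span corruption as sequence-to-sequence transformation: the source hides two
# spans behind sentinel tokens, the target spells the hidden spans out.
source = "The <extra_id_0> walks in <extra_id_1> park."
target = "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>"

input_ids = tokenizer(source, return_tensors="pt").input_ids
labels = tokenizer(target, return_tensors="pt").input_ids

# The same encoder-decoder interface covers generative tasks such as
# translation, summarization, and simplification.
loss = model(input_ids=input_ids, labels=labels).loss
print(f"denoising loss: {loss.item():.3f}")
```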