Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks. In this paper, we perform an extensive empirical evaluation of various tuning strategies for multilingual learning, particularly in the context of text summarization. Specifically, we explore the relative advantages of three families of multilingual tuning strategies (a total of five models) and empirically evaluate them for summarization across 45 languages. Experimentally, we not only establish new state-of-the-art results on the XL-Sum dataset but also derive a series of observations that we hope can provide guidance for future research on the design of multilingual tuning strategies.