Pretrained bidirectional Transformers, such as BERT, have achieved significant improvements on a wide variety of language understanding tasks, but it is not straightforward to apply them directly to natural language generation. In this paper, we present s2s-ft, a sequence-to-sequence fine-tuning toolkit that adapts pretrained Transformers for conditional generation tasks. Inspired by UniLM, we implement three sequence-to-sequence fine-tuning algorithms, namely causal fine-tuning, masked fine-tuning, and pseudo-masked fine-tuning. Experimental results show that, by leveraging existing pretrained bidirectional Transformers, s2s-ft achieves strong performance on several benchmarks for abstractive summarization and question generation. Moreover, we demonstrate that s2s-ft supports both monolingual and multilingual NLG tasks. The toolkit is available at https://github.com/microsoft/unilm/tree/master/s2s-ft.
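For intuition about the fine-tuning algorithms named above, the masked and pseudo-masked variants rely on a UniLM-style sequence-to-sequence self-attention mask, in which source tokens attend to one another bidirectionally while target tokens attend to the whole source and only to preceding target positions. The sketch below is a minimal illustration in plain PyTorch under these assumptions; the function name and mask layout are our own and are not part of the s2s-ft API.

```python
import torch

def seq2seq_attention_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    """Illustrative UniLM-style sequence-to-sequence self-attention mask.

    Source tokens attend bidirectionally within the source segment; target
    tokens attend to the full source and causally within the target segment.
    True means "may attend".
    """
    total = src_len + tgt_len
    mask = torch.zeros(total, total, dtype=torch.bool)
    # Every position may attend to the entire source segment.
    mask[:, :src_len] = True
    # Target positions attend only to current and earlier target positions.
    mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len)).bool()
    return mask

# Example: 4 source tokens followed by 3 target tokens.
print(seq2seq_attention_mask(4, 3).int())
```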