Pre-trained language models (PrLMs) have to carefully manage input units when training on very large texts with vocabularies of millions of words. Previous works have shown that incorporating span-level information over consecutive words in pre-training can further improve the performance of PrLMs. However, since span-level clues are introduced and fixed during pre-training, these methods are time-consuming and lack flexibility. To alleviate this inconvenience, this paper presents a novel span fine-tuning method for PrLMs, which allows the span setting to be adaptively determined by the specific downstream task during the fine-tuning phase. In detail, any sentence processed by the PrLM is segmented into multiple spans according to a pre-sampled dictionary. The segmentation information is then fed through a hierarchical CNN module together with the representation outputs of the PrLM, ultimately generating a span-enhanced representation. Experiments on the GLUE benchmark show that the proposed span fine-tuning method significantly enhances the PrLM and, at the same time, offers more flexibility in an efficient way.
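To make the pipeline concrete, the following is a minimal PyTorch sketch of the two steps the abstract describes: dictionary-based span segmentation, followed by a hierarchical CNN that fuses span information with the PrLM's token representations. All names (segment_into_spans, HierarchicalCNN) and design details (greedy longest-match segmentation, intra-span convolution with max-pooling, concatenation-and-projection fusion) are illustrative assumptions, not the paper's exact architecture.

```python
# A hedged sketch of span fine-tuning: segment tokens into spans using a
# pre-sampled dictionary, then fuse span-level and token-level features.
# Architecture details here are assumptions for illustration only.
import torch
import torch.nn as nn

def segment_into_spans(tokens, dictionary, max_span_len=4):
    """Greedy longest-match segmentation against a pre-sampled dictionary.

    Returns a list of (start, end) index pairs covering the token sequence.
    The matching strategy is an assumption; the paper only states that
    sentences are segmented according to a pre-sampled dictionary.
    """
    spans, i = [], 0
    while i < len(tokens):
        end = i + 1  # fall back to a single-token span
        for j in range(min(len(tokens), i + max_span_len), i + 1, -1):
            if " ".join(tokens[i:j]) in dictionary:
                end = j
                break
        spans.append((i, end))
        i = end
    return spans

class HierarchicalCNN(nn.Module):
    """Hypothetical fusion module: combines token-level PrLM outputs with
    span boundaries to produce a span-enhanced representation."""
    def __init__(self, hidden=768, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, token_reprs, spans):
        # token_reprs: (seq_len, hidden) PrLM output for one sentence.
        # 1) Intra-span convolution + max-pooling -> one vector per span.
        span_vecs = []
        for start, end in spans:
            h = token_reprs[start:end].t().unsqueeze(0)  # (1, hidden, span_len)
            span_vecs.append(self.conv(h).max(dim=2).values.squeeze(0))
        # 2) Broadcast each span vector back over its tokens, then fuse
        #    with the original token representations by projection.
        expanded = torch.zeros_like(token_reprs)
        for vec, (start, end) in zip(span_vecs, spans):
            expanded[start:end] = vec
        return self.proj(torch.cat([token_reprs, expanded], dim=-1))

# Example usage with toy inputs:
tokens = ["new", "york", "is", "a", "city"]
dictionary = {"new york", "a city"}
spans = segment_into_spans(tokens, dictionary)   # [(0, 2), (2, 3), (3, 5)]
reprs = torch.randn(len(tokens), 768)            # stand-in for PrLM outputs
enhanced = HierarchicalCNN()(reprs, spans)       # (5, 768) span-enhanced
```

Because the fusion module sits on top of the frozen-format PrLM outputs, the span setting (dictionary and segmentation) can be swapped per downstream task at fine-tuning time, which is the flexibility the abstract claims over fixing spans during pre-training.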