Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a random-token masking strategy in which a fixed masking ratio is applied and different contents are masked with equal probability throughout the entire training process. However, the effect of masking interacts with the model's pre-training status, which evolves as training proceeds. In this paper, we show that such time-invariant MLM settings for masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content at different training stages, improving pre-training efficiency and effectiveness as verified on downstream tasks. Our work is a pioneering study of time-variant masking strategies for both ratio and content, and it offers a better understanding of how the masking ratio and masked content influence MLM pre-training.
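To make the idea of time-variant masking concrete, below is a minimal sketch of what scheduled masking could look like in practice. It is not the paper's reported method: the linear decay schedule, the start/end ratios, the per-token weighting mechanism, the function names, and the BERT-style mask token id (103) are all illustrative assumptions.

```python
# A minimal sketch (not the paper's exact method) of time-variant MLM masking:
# the masking ratio decays over training, and per-token weights bias which
# content gets masked. All schedules and hyperparameters are illustrative.
import torch

def masking_ratio(step: int, total_steps: int,
                  start_ratio: float = 0.30, end_ratio: float = 0.15) -> float:
    """Linearly decay the masking ratio as pre-training progresses
    (assumed schedule shape; the paper's schedules may differ)."""
    progress = min(step / max(total_steps, 1), 1.0)
    return start_ratio + (end_ratio - start_ratio) * progress

def time_variant_mask(input_ids: torch.Tensor,
                      token_weights: torch.Tensor,
                      step: int, total_steps: int,
                      mask_token_id: int = 103):
    """Mask tokens with a step-dependent ratio; `token_weights` biases
    which positions are selected (e.g., up-weighting certain content)."""
    ratio = masking_ratio(step, total_steps)
    batch, seq_len = input_ids.shape
    num_to_mask = max(1, int(seq_len * ratio))
    # Sample masked positions per sequence, proportional to token_weights.
    probs = token_weights / token_weights.sum(dim=-1, keepdim=True)
    masked_pos = torch.multinomial(probs, num_to_mask)   # (batch, num_to_mask)
    labels = torch.full_like(input_ids, -100)            # -100 = ignored by loss
    masked_ids = input_ids.clone()
    labels.scatter_(1, masked_pos, input_ids.gather(1, masked_pos))
    masked_ids.scatter_(1, masked_pos, mask_token_id)
    return masked_ids, labels

# Usage: uniform weights reduce this to plain random masking with a decaying ratio.
ids = torch.randint(1000, 2000, (2, 16))
weights = torch.ones_like(ids, dtype=torch.float)
masked, labels = time_variant_mask(ids, weights, step=5000, total_steps=100000)
```

Under this sketch, setting non-uniform `token_weights` would let the content side of the schedule vary over time as well, by recomputing the weights at different training stages.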