Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy in which a fixed masking ratio is applied and all contents are masked with equal probability throughout the entire training. However, the model is influenced in complex ways by its pre-training status, which changes continually as training progresses. In this paper, we show that such time-invariant MLM settings for the masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content at different training stages, improving pre-training efficiency and effectiveness as verified on downstream tasks. Our work is a pioneering study of time-variant masking strategies for both ratio and content, and it offers a better understanding of how the masking ratio and masked content influence MLM pre-training.
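To make the idea of a time-variant masking ratio concrete, below is a minimal illustrative sketch, not the paper's actual schedule (the abstract does not specify one). It assumes a simple linear decay of the masking ratio over training steps and a BERT-style [MASK] token id; the helper names (`scheduled_masking_ratio`, `mask_tokens`) and the 0.30 to 0.15 range are hypothetical choices for illustration only.

```python
import random

def scheduled_masking_ratio(step, total_steps, start_ratio=0.30, end_ratio=0.15):
    """Linearly anneal the masking ratio from start_ratio to end_ratio over training.
    This is one plausible instance of a time-variant schedule, assumed for illustration."""
    progress = min(step / max(total_steps, 1), 1.0)
    return start_ratio + (end_ratio - start_ratio) * progress

def mask_tokens(token_ids, step, total_steps, mask_id=103):
    """Mask each token with the step-dependent probability.
    mask_id=103 assumes BERT's uncased [MASK] token id."""
    ratio = scheduled_masking_ratio(step, total_steps)
    masked, labels = [], []
    for tok in token_ids:
        if random.random() < ratio:
            masked.append(mask_id)
            labels.append(tok)    # predict the original token at masked positions
        else:
            masked.append(tok)
            labels.append(-100)   # ignored by the MLM loss
    return masked, labels

# Early in training the ratio is near 0.30; late in training it approaches 0.15.
inputs, labels = mask_tokens([2023, 2003, 1037, 7099], step=1000, total_steps=100000)
```

Under such a schedule, the effective masking difficulty changes with the pre-training stage, in contrast to the fixed-ratio, uniform-probability masking described above.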