Masked Language Modeling (MLM) is widely used to pretrain language models. The standard random masking strategy in MLM causes the pre-trained language models (PLMs) to be biased toward high-frequency tokens. As a result, representation learning of rare tokens is poor, which limits the performance of PLMs on downstream tasks. To alleviate this frequency bias issue, we propose two simple and effective Weighted Sampling strategies for masking tokens based on token frequency and training loss. We apply these two strategies to BERT and obtain Weighted-Sampled BERT (WSBERT). Experiments on the Semantic Textual Similarity (STS) benchmark show that WSBERT significantly improves sentence embeddings over BERT. Combining WSBERT with calibration methods and prompt learning further improves sentence embeddings. We also investigate fine-tuning WSBERT on the GLUE benchmark and show that Weighted Sampling improves the transfer learning capability of the backbone PLM as well. We further analyze and provide insights into how WSBERT improves token embeddings.
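To make the frequency-based variant concrete, the sketch below illustrates one way weighted mask sampling could be implemented, assuming an inverse-frequency weighting with additive smoothing; the function name, the weighting formula, and the smoothing constant are illustrative assumptions, not the paper's exact formulation, and BERT's usual 80/10/10 token-replacement scheme is omitted for brevity.

```python
import torch

def frequency_weighted_mask(input_ids, token_freqs, mask_token_id,
                            mask_ratio=0.15, smoothing=1e-5):
    """Sketch of frequency-weighted mask sampling (assumed weighting, not the
    paper's exact formulation): rare tokens are selected for masking with
    higher probability than under uniform random masking.

    input_ids:   LongTensor of shape (seq_len,)
    token_freqs: 1-D tensor mapping vocabulary id -> corpus frequency
    """
    # Weight each position inversely to its token's corpus frequency.
    weights = 1.0 / (token_freqs[input_ids].float() + smoothing)
    weights = weights / weights.sum()

    # Sample ~15% of positions without replacement according to the weights.
    num_to_mask = max(1, int(mask_ratio * input_ids.size(0)))
    masked_positions = torch.multinomial(weights, num_to_mask, replacement=False)

    # Build MLM labels: -100 marks positions ignored by the loss.
    labels = torch.full_like(input_ids, -100)
    labels[masked_positions] = input_ids[masked_positions]

    # Replace the sampled positions with the [MASK] token id.
    masked_input = input_ids.clone()
    masked_input[masked_positions] = mask_token_id
    return masked_input, labels
```

The loss-based variant described in the abstract could follow the same pattern, with per-token training loss replacing inverse corpus frequency as the sampling weight.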