Masked language models (MLMs) are pre-trained with a denoising objective that mismatches the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementary strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on $15$ different Twitter datasets for social meaning detection. Our methods achieve a $2.34\%$ $F_1$ improvement over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only $5\%$ of the training data (severely few-shot), our methods achieve an impressive $68.54\%$ average $F_1$. The methods are also language-agnostic, as we show in a zero-shot setting involving six datasets from three different languages.