As language models are increasingly deployed in human-facing machine learning tools, bias against demographic subgroups has gained attention. We propose FineDeb, a two-phase debiasing framework for language models: it first performs contextual debiasing of the embeddings learned by a pretrained language model, then fine-tunes the model on a language modeling objective. Our results show that FineDeb achieves stronger debiasing than other methods, which often yield models as biased as the original language model. Our framework generalizes to demographics with multiple classes, and we demonstrate its effectiveness through extensive experiments and comparisons with state-of-the-art techniques. We release our code and data on GitHub.
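For concreteness, the sketch below illustrates a debias-then-fine-tune pipeline of the kind the abstract describes, using PyTorch and Hugging Face Transformers. It is not FineDeb itself: the paper's contextual debiasing objective is not given here, so phase one substitutes a simple projection-based hard debiasing of the input embedding matrix, and the base model (`gpt2`), word pairs, and training texts are illustrative placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative placeholder; the actual base LM may differ
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# --- Phase 1: debias the embeddings learned by the pretrained LM ----------
# Stand-in for contextual debiasing: project out an estimated bias
# direction (hard debiasing). Binary word pairs are used only for brevity;
# a multi-class demographic would use group-wise centroids instead.
emb = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)

def token_vec(word: str) -> torch.Tensor:
    """Average the embeddings of a word's subword tokens."""
    ids = tok(word, add_special_tokens=False)["input_ids"]
    return emb[ids].mean(dim=0)

with torch.no_grad():
    pairs = [("he", "she"), ("man", "woman"), ("father", "mother")]
    diffs = torch.stack([token_vec(a) - token_vec(b) for a, b in pairs])
    bias_dir = diffs.mean(dim=0)
    bias_dir /= bias_dir.norm()
    # Remove each embedding's component along the bias direction.
    emb -= (emb @ bias_dir).unsqueeze(1) * bias_dir.unsqueeze(0)

# --- Phase 2: fine-tune on a standard language-modeling objective ---------
texts = ["The doctor finished their shift.", "The nurse wrote the report."]
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for text in texts:  # toy corpus; real fine-tuning would use a full dataset
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss  # causal LM loss
    loss.backward()
    optim.step()
    optim.zero_grad()
```

Note that in GPT-2 the input and output embeddings are tied, so debiasing the input matrix in phase one also alters the output projection; the subsequent fine-tuning phase then lets the model recover language modeling quality around the debiased representation.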