NLP systems use language models, such as Masked Language Models (MLMs), that are pre-trained on large quantities of text such as Wikipedia to create representations of language. BERT is a powerful and flexible general-purpose MLM developed using unlabeled text. Pre-training on large quantities of text also has the potential to transparently embed the cultural and social biases found in the source text into the MLM. This study compares biases in general-purpose and medical MLMs using the StereoSet bias assessment tool. The general-purpose MLMs showed significant bias overall, with BERT scoring 57 and RoBERTa scoring 61. The best performances were found in the gender category, with scores of 63 for BERT and 73 for RoBERTa. Scores for profession, race, and religion were similar to the overall bias scores for the general-purpose MLMs. The medical MLMs showed more bias in all categories than the general-purpose MLMs, except for SciBERT, whose race bias score of 55 was superior to BERT's score of 53. More gender (medical 54-58 vs. general 63-73) and religious (46-54 vs. 58) bias was found with the medical MLMs. This evaluation of four medical MLMs for stereotyped assessments about race, gender, religion, and profession showed inferior performance relative to general-purpose MLMs. The medically focused MLMs differ considerably in their training source data, which is likely the root cause of the differences in their StereoSet stereotype bias ratings.
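As background on how a StereoSet-style evaluation quantifies bias, the stereotype score can be sketched in a few lines: for each context, the model assigns a likelihood to a stereotypical and an anti-stereotypical completion, and the score is the percentage of contexts where the stereotypical one is preferred, with 50 being ideal. This is a minimal illustrative sketch; the function name and the log-likelihood values below are hypothetical, not taken from the study or the official StereoSet code.

```python
# Sketch of a StereoSet-style stereotype score (illustrative only).
# An unbiased model would score 50; deviation in either direction
# indicates a systematic preference.

def stereotype_score(pairs):
    """pairs: list of (stereo_loglik, antistereo_loglik) tuples,
    one per evaluated context. Returns the percentage of contexts
    where the stereotypical completion is more likely."""
    if not pairs:
        raise ValueError("no scored pairs")
    preferred = sum(1 for stereo, anti in pairs if stereo > anti)
    return 100.0 * preferred / len(pairs)

# Hypothetical log-likelihoods for three contexts; in 2 of 3 the
# stereotypical completion is preferred, so the score is ~66.7.
scores = [(-4.2, -5.1), (-3.8, -3.1), (-2.0, -2.9)]
print(stereotype_score(scores))
```

In practice, the per-sentence likelihoods would come from querying the MLM itself (e.g., a pseudo-log-likelihood over masked tokens), which is the step the abstract's BERT/RoBERTa scores summarize.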