Recent works have found evidence of gender bias in models of machine translation and coreference resolution, using mostly synthetic diagnostic datasets. While these quantify bias in a controlled experiment, they often do so on a small scale and consist mostly of artificial, out-of-distribution sentences. In this work, we find grammatical patterns indicating stereotypical and non-stereotypical gender-role assignments (e.g., female nurses versus male dancers) in corpora from three domains, resulting in the first large-scale gender bias dataset of 108K diverse real-world English sentences. We manually verify the quality of our corpus and use it to evaluate gender bias in various coreference resolution and machine translation models. We find that all tested models tend to over-rely on gender stereotypes when presented with natural inputs, which may be especially harmful when deployed in commercial systems. Finally, we show that our dataset lends itself to finetuning a coreference resolution model, finding that it mitigates bias on a held-out set. Our dataset and models are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will spur future research into gender bias evaluation and mitigation techniques in realistic settings.
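For illustration only, the following is a minimal sketch of how stereotypical and non-stereotypical gender-role patterns of the kind described above might be surfaced with an off-the-shelf dependency parser. The profession list, pronoun sets, and matching rule here are assumptions made for the example, not the extraction patterns actually used to build BUG.

```python
# Illustrative sketch (not the paper's method): flag sentences in which a
# profession noun is a syntactic subject and a gendered pronoun appears
# later in the sentence, as a rough proxy for a gender-role assignment.
import spacy

nlp = spacy.load("en_core_web_sm")

PROFESSIONS = {"nurse", "dancer", "doctor", "engineer"}  # hypothetical seed list
MALE_PRONOUNS = {"he", "him", "his"}
FEMALE_PRONOUNS = {"she", "her", "hers"}

def candidate_gender_pattern(sentence: str):
    """Return (profession, pronoun_gender) if the sentence matches the
    profession-subject + gendered-pronoun pattern, else None."""
    doc = nlp(sentence)
    profession = None
    for tok in doc:
        # A profession noun acting as a subject of some verb.
        if tok.lemma_.lower() in PROFESSIONS and tok.dep_ in ("nsubj", "nsubjpass"):
            profession = tok.lemma_.lower()
        # A gendered pronoun occurring after the profession was seen.
        if profession and tok.lower_ in MALE_PRONOUNS | FEMALE_PRONOUNS:
            gender = "male" if tok.lower_ in MALE_PRONOUNS else "female"
            return profession, gender
    return None

print(candidate_gender_pattern("The nurse said that she was tired."))
# -> ('nurse', 'female')
```

Such a surface pattern only yields candidate sentences; whether a match is stereotypical or not would still depend on pairing the profession with external gender-distribution statistics, and the matches would need manual verification as described in the abstract.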