As machine learning methods are deployed in real-world settings such as healthcare, legal systems, and social science, it is crucial to recognize how they shape social biases and stereotypes in these sensitive decision-making processes. Among such real-world deployments are large-scale pretrained language models (LMs) that can be potentially dangerous in manifesting undesirable representational biases - harmful biases resulting from stereotyping that propagate negative generalizations involving gender, race, religion, and other social constructs. As a step towards improving the fairness of LMs, we carefully define several sources of representational biases before proposing new benchmarks and metrics to measure them. With these tools, we propose steps towards mitigating social biases during text generation. Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information for high-fidelity text generation, thereby pushing forward the performance-fairness Pareto frontier.