Automatic processing of language is becoming pervasive in our lives, often taking a central role in our decision making: choosing the wording of our messages and emails, translating what we read, or even holding full conversations with us. Word embeddings are a key component of modern natural language processing systems. They provide a representation of words that has boosted the performance of many applications, acting as a semblance of meaning. Word embeddings seem to capture something of the meaning of words from raw text, but at the same time they also distill stereotypes and societal biases, which are subsequently relayed to the final applications. Such biases can be discriminatory. It is very important to detect and mitigate them, to prevent discriminatory behavior in automated processes, which, because of their scale, can be much more harmful than in the case of humans. There are currently many tools and techniques to detect and mitigate biases in word embeddings, but they present many barriers to the engagement of people without technical skills. As it happens, most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias is harmful, do not have such skills, and the technical barriers keep them from engaging in the process of bias detection. We have studied the barriers in existing tools and explored their possibilities and limitations with different kinds of users. Based on this exploration, we propose to develop a tool specially aimed at lowering the technical barriers and providing the exploration power needed to address the requirements of experts, scientists, and people in general who are willing to audit these technologies.
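To make the notion of "distilled bias" concrete, the following is a minimal sketch of one common measurement: projecting word vectors onto a gender direction defined by anchor words, in the spirit of direct-bias tests from the literature. The toy 3-dimensional vectors below are hypothetical illustrations, not taken from any real embedding model, and real audits would use pretrained embeddings and carefully chosen word sets.

```python
import math

# Basic vector helpers (pure Python, no external dependencies).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def cosine(u, v):
    return dot(u, v) / (norm(u) * norm(v))

# Hypothetical 3-d embeddings, chosen only to illustrate a gendered skew.
emb = {
    "he":       [ 1.0, 0.1, 0.2],
    "she":      [-1.0, 0.1, 0.2],
    "engineer": [ 0.7, 0.9, 0.1],
    "nurse":    [-0.6, 0.9, 0.1],
}

# A simple gender direction: the difference of the two anchor vectors.
gender_dir = [a - b for a, b in zip(emb["he"], emb["she"])]

def bias_score(word):
    """Cosine of a word with the gender direction; the sign shows the skew."""
    return cosine(emb[word], gender_dir)

print(f"engineer: {bias_score('engineer'):+.2f}")  # positive: skews toward "he"
print(f"nurse:    {bias_score('nurse'):+.2f}")     # negative: skews toward "she"
```

Even this tiny sketch shows why such audits are hard for non-technical users: choosing the anchor words, the word sets, and the metric all require programming and modeling decisions, which is precisely the barrier the proposed tool aims to lower.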