Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification. Finally, we show how limited amounts of human feedback can be leveraged to learn a similarity specification that can then be used to train downstream fairness-aware models.
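To make the zero-shot generation step concrete, the sketch below shows one way GPT-3 could be prompted to produce candidate counterfactual pairs. The abstract does not specify the authors' prompt, model, or sampling parameters, so the prompt template, model name, and settings here are illustrative assumptions only; the sketch uses the legacy `openai` Python completion client.

```python
# Illustrative sketch only: the prompt wording, model name, and sampling
# parameters below are assumptions, not the authors' actual setup.
# Requires the legacy `openai` Python client (pre-1.0) and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def generate_counterfactual(sentence: str, attribute: str = "gender") -> str:
    """Zero-shot prompt GPT-3 to rewrite a sentence so that only the
    sensitive attribute changes while meaning and tone are preserved."""
    prompt = (
        f"Rewrite the following sentence so that it refers to a different "
        f"{attribute}, keeping the meaning and tone unchanged.\n\n"
        f"Sentence: {sentence}\n"
        f"Rewritten sentence:"
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 engine
        prompt=prompt,
        max_tokens=64,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


# Each (sentence, generate_counterfactual(sentence)) pair is a candidate
# for crowdsourced validation before being used in a fairness specification.
```

Pairs produced this way would then be filtered by human annotators, matching the crowdsourcing step described above, before any similarity specification is learned from them.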