Text style can reveal sensitive attributes of the author (e.g., race or age) to the reader, which can, in turn, lead to privacy violations and bias in both human and algorithmic decisions based on text. For example, the writing style of a job application might reveal protected attributes of the candidate, which could lead to bias in hiring decisions, regardless of whether those decisions are made algorithmically or by humans. We propose a VAE-based framework that obfuscates the stylistic features of human-generated text through style transfer, by automatically rewriting the text itself. Our framework operationalizes the notion of obfuscated style in a flexible way that supports two distinct variants: (1) a minimal notion that effectively intersects the various styles seen in training, and (2) a maximal notion that obfuscates by adding the stylistic features of all sensitive attributes to the text, in effect computing a union of styles. While our style-obfuscation framework can serve multiple purposes, we demonstrate its effectiveness in improving the fairness of downstream classifiers. We also conduct a comprehensive study of style pooling's effect on fluency, semantic consistency, and attribute removal from text, in both two- and three-domain style obfuscation settings.
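The intersection/union intuition behind the two obfuscation notions can be sketched with element-wise pooling over per-domain style representations. This is a minimal illustrative sketch, not the paper's actual mechanism: the vectors, their dimensionality, and the use of simple min/max pooling are all assumptions made here for clarity.

```python
import numpy as np

# Hypothetical per-domain style vectors, one per sensitive-attribute domain.
# In the actual framework the pooling happens inside a VAE over text, but the
# intersection/union intuition can be shown with plain element-wise pooling.
rng = np.random.default_rng(0)
styles = rng.random((3, 8))  # 3 domains, 8 illustrative style features

# (1) "Minimal" obfuscated style: keep only what all domains share
# (an intersection), approximated by an element-wise minimum.
minimal_style = styles.min(axis=0)

# (2) "Maximal" obfuscated style: combine the features of every domain
# (a union), approximated by an element-wise maximum.
maximal_style = styles.max(axis=0)

# The minimal style is dominated by every domain's style; the maximal
# style dominates them all.
assert (minimal_style <= styles).all() and (maximal_style >= styles).all()
```

Under this toy view, rewriting toward `minimal_style` strips domain-specific markers, while rewriting toward `maximal_style` blends markers from all domains so that no single attribute stands out.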