Weak supervision overcomes the label bottleneck, enabling efficient development of training sets. Millions of models trained on such datasets have been deployed in the real world and interact with users on a daily basis. However, the techniques that make weak supervision attractive -- such as integrating any source of signal to estimate unknown labels -- also ensure that the pseudolabels it produces are highly biased. Surprisingly, given its everyday use and the potential for increased bias, weak supervision has not been studied from the point of view of fairness. This work begins such a study. Our departure point is the observation that even when a fair model can be built from a dataset with access to ground-truth labels, the corresponding dataset labeled via weak supervision can be arbitrarily unfair. Fortunately, not all is lost: we propose and empirically validate a model of source unfairness in weak supervision, then introduce a simple counterfactual fairness-based technique that can mitigate these biases. Theoretically, we show that it is possible for our approach to simultaneously improve both accuracy and fairness metrics -- in contrast to standard fairness approaches that suffer from tradeoffs. Empirically, we show that our technique improves accuracy on weak supervision baselines by as much as 32% while reducing the demographic parity gap by 82.5%.
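For reference, the demographic parity gap reported above is conventionally the absolute difference in positive-prediction rates between protected groups. A minimal sketch of that metric (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups.

    y_pred: binary model predictions (0/1)
    group:  binary protected attribute (0/1), one entry per example
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group = 0)
    rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group = 1)
    return abs(rate_a - rate_b)

# Toy example: a pseudolabel-trained model that favors one group
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.5
```

A perfectly parity-fair classifier drives this quantity to 0; the 82.5% figure in the abstract refers to a relative reduction of this gap against weak supervision baselines.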