Existing work on fairness modeling commonly assumes that sensitive attributes are fully available for all instances, which may not hold in many real-world applications due to the high cost of acquiring sensitive information. When sensitive attributes are not disclosed or available, it is necessary to manually annotate a small portion of the training data to mitigate bias. However, the skewed distribution across sensitive groups in the original dataset is preserved in the annotated subset, which leads to suboptimal bias mitigation. To tackle this challenge, we propose Active Penalization Of Discrimination (APOD), an interactive framework that guides the limited annotations toward maximally eliminating the effect of algorithmic bias. APOD integrates discrimination penalization with active instance selection to make efficient use of the limited annotation budget, and it is theoretically proven to bound the algorithmic bias. Evaluation on five benchmark datasets shows that APOD outperforms state-of-the-art baseline methods under a limited annotation budget and achieves performance comparable to fully annotated bias mitigation, demonstrating that APOD can benefit real-world applications where sensitive information is limited.
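To make the framework concrete, the sketch below illustrates the kind of interaction loop the abstract describes: alternating between training a classifier with a discrimination penalty on the instances annotated so far and actively selecting the next instance whose sensitive attribute should be annotated. This is a minimal sketch under stated assumptions, not the paper's actual formulation: the demographic-parity-style penalty and the hooks `train_fn`, `select_fn`, and `annotate_fn` are illustrative placeholders for APOD's penalization and selection criteria.

```python
import numpy as np

def fairness_penalty(scores: np.ndarray, groups: np.ndarray) -> float:
    """Demographic-parity-style gap between the mean predicted scores of
    two sensitive groups (an illustrative discrimination penalty; the
    paper's penalty may differ)."""
    return abs(scores[groups == 0].mean() - scores[groups == 1].mean())

def apod_loop(X, y, budget, train_fn, select_fn, annotate_fn):
    """Alternate penalized training and active annotation (hypothetical API).

    train_fn(X, y, annotated) -> model fitted with a fairness penalty
        computed on the subset whose sensitive attributes are in `annotated`.
    select_fn(model, x) -> heuristic score of how much annotating x is
        expected to help bias mitigation.
    annotate_fn(i) -> sensitive attribute of instance i (a human annotator).
    """
    annotated = {}                      # index -> sensitive attribute
    model = train_fn(X, y, annotated)   # initial fit, no annotations yet
    for _ in range(budget):
        # Score every unannotated instance and pick the most informative one.
        pool = [i for i in range(len(X)) if i not in annotated]
        i = max(pool, key=lambda j: select_fn(model, X[j]))
        annotated[i] = annotate_fn(i)   # spend one unit of annotation budget
        # Retrain with the discrimination penalty on the enlarged subset.
        model = train_fn(X, y, annotated)
    return model
```

The point of the loop is that each annotation is chosen to maximize its effect on bias mitigation rather than drawn at random, which is how APOD avoids reproducing the skewed group distribution in the annotated subset.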