Fairness-aware learning aims to construct classifiers that not only make accurate predictions but also do not discriminate against specific groups. It is a fast-growing area of machine learning with far-reaching societal impact. However, existing fair learning methods are vulnerable to accidental or malicious artifacts in the training data, which can cause them to unknowingly produce unfair classifiers. In this work we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution. We introduce FLEA, a filtering-based algorithm that identifies and suppresses those data sources that would have a negative impact on fairness or accuracy if they were used for training. As such, FLEA is not a replacement for prior fairness-aware learning methods but rather an augmentation that makes any of them robust against unreliable training data. We demonstrate the effectiveness of our approach through a diverse range of experiments on multiple datasets. Additionally, we prove formally that, given enough data, FLEA protects the learner against corruptions as long as the fraction of affected data sources is less than half. Our source code and documentation are available at https://github.com/ISTAustria-CVML/FLEA.
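The abstract describes FLEA only at a high level: score the available data sources, discard the suspicious ones, and hand the retained data to any fairness-aware learner. The sketch below illustrates that generic filter-then-train pattern under loose assumptions; it is not the FLEA algorithm itself (whose actual filtering criteria are defined in the paper and the linked repository). The functions pairwise_disagreement, filter_sources, and train_on_filtered, as well as the disagreement heuristic, are hypothetical placeholders.

```python
# Illustrative sketch of a generic filter-then-train wrapper for multisource data.
# NOT the FLEA algorithm; all names and the scoring heuristic are placeholders.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression


def pairwise_disagreement(src_a, src_b):
    """Hypothetical score: how poorly a classifier trained on one source
    predicts the labels of another source (a stand-in for a proper
    discrepancy/disparity measure)."""
    X_a, y_a = src_a
    X_b, y_b = src_b
    clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    return 1.0 - clf.score(X_b, y_b)


def filter_sources(sources):
    """Keep a majority subset of sources with the lowest total disagreement,
    assuming fewer than half of the sources are corrupted."""
    n = len(sources)
    scores = np.zeros(n)
    for i, j in combinations(range(n), 2):
        d = pairwise_disagreement(sources[i], sources[j])
        scores[i] += d
        scores[j] += d
    keep = np.argsort(scores)[: n // 2 + 1]  # retain a clean-looking majority
    return [sources[i] for i in keep]


def train_on_filtered(sources, fair_learner_fit):
    """Train any (fairness-aware) learner on the union of the retained sources."""
    kept = filter_sources(sources)
    X = np.vstack([x for x, _ in kept])
    y = np.concatenate([t for _, t in kept])
    return fair_learner_fit(X, y)
```

In this reading, the filtering step is agnostic to the downstream learner: fair_learner_fit can be any existing fairness-aware training routine, which matches the abstract's framing of the method as an augmentation rather than a replacement.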