Group fairness ensures that the outcomes of decision-making systems based on machine learning (ML) are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires access to the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values.
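To make the idea concrete, the following is a minimal conceptual sketch of how per-group statistics needed for a fairness measure can be computed without any client revealing its sensitive attribute: clients additively secret-share group-indicator counts (a simplified stand-in for MPC), only the aggregate is reconstructed, and Laplace noise is added for DP before the statistic is released. This is an illustrative assumption, not the paper's actual protocol; all function and variable names here are hypothetical.

```python
# Conceptual sketch (hypothetical, not the paper's protocol): securely aggregate
# per-group prediction counts and release a DP-noised demographic-parity gap.
import numpy as np

rng = np.random.default_rng(0)
MOD = 2**32  # additive secret sharing works modulo a large integer

def secret_share(value, n_shares):
    """Split an integer into n additive shares that sum to `value` mod MOD."""
    shares = rng.integers(0, MOD, size=n_shares - 1)
    last = (value - shares.sum()) % MOD
    return np.append(shares, last)

# Each client holds (sensitive_group, model_prediction) locally and never reveals them.
clients = [(rng.integers(0, 2), rng.integers(0, 2)) for _ in range(100)]
n = len(clients)

# Clients secret-share their per-(group, prediction) indicator counts;
# only the summed shares are ever combined, never an individual value.
agg_shares = np.zeros((2, 2, n), dtype=np.int64)  # [group, prediction, share index]
for group, pred in clients:
    for g in range(2):
        for p in range(2):
            agg_shares[g, p] += secret_share(int(group == g and pred == p), n)
            agg_shares[g, p] %= MOD

counts = agg_shares.sum(axis=-1) % MOD  # reconstructed aggregate counts only

# Laplace noise for epsilon-DP on the released counts (sensitivity 1 per client).
epsilon = 1.0
noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

# Demographic-parity gap estimated purely from noisy, securely aggregated counts.
rate = lambda g: noisy[g, 1] / max(noisy[g].sum(), 1e-9)
print("estimated demographic-parity gap:", abs(rate(0) - rate(1)))
```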