As machine learning becomes increasingly incorporated into crucial decision-making scenarios such as healthcare, recruitment, and loan assessment, concerns about the privacy and fairness of such systems have grown. Federated learning has been viewed as a promising solution for collaboratively training machine learning models across multiple parties while keeping their local data private. However, federated learning also poses new challenges for mitigating potential bias against certain populations (e.g., demographic groups), since doing so typically requires centralized access to the sensitive information (e.g., race, gender) of each data point. Motivated by the importance and challenges of group fairness in federated learning, in this work we propose FairFed, a novel algorithm that enhances group fairness via a fairness-aware aggregation method, aiming to provide fair model performance across different sensitive groups (e.g., racial and gender groups) while maintaining high utility. The formulation also offers flexibility to customize the local debiasing strategy of each client. In federated training on two widely studied fairness datasets, Adult and COMPAS, our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
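The abstract does not spell out the aggregation rule itself. The sketch below illustrates, under stated assumptions, one plausible form a fairness-aware aggregation could take: clients report a local fairness gap (here an equal-opportunity gap between two sensitive groups), and clients whose local gap deviates most from the global gap receive smaller aggregation weights. The helper names and the tuning parameter beta are hypothetical and introduced only for illustration; this is not the exact FairFed algorithm.

import numpy as np

def equal_opportunity_gap(y_true, y_pred, sensitive):
    # Difference in true-positive rates between the two sensitive groups (0 and 1).
    tpr = []
    for g in (0, 1):
        mask = (sensitive == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean() if mask.any() else 0.0)
    return tpr[0] - tpr[1]

def fairness_aware_aggregate(client_params, client_sizes, local_gaps, global_gap, beta=1.0):
    # Start from standard data-size (FedAvg-style) weights, then down-weight
    # clients whose local fairness gap deviates strongly from the global gap.
    # beta (hypothetical) controls how aggressively the weights are adjusted.
    base = np.array(client_sizes, dtype=float)
    base /= base.sum()
    deviation = np.abs(np.array(local_gaps) - global_gap)
    weights = base * np.exp(-beta * deviation)   # larger mismatch -> smaller weight
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Toy usage: three clients with scalar "models" for brevity.
params = [np.array([0.2]), np.array([0.5]), np.array([0.9])]
sizes = [100, 300, 200]
local_gaps = [0.05, 0.30, 0.10]
global_gap = 0.12
print(fairness_aware_aggregate(params, sizes, local_gaps, global_gap))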