Ensuring fairness of machine learning (ML) algorithms is becoming an increasingly important mission for ML service providers. This is even more critical and challenging in the federated learning (FL) scenario, given the large number of diverse participating clients. Simply mandating equality across clients could lead to many undesirable consequences, potentially discouraging high-performing clients and resulting in sub-optimal overall performance. In order to achieve better equity rather than equality, in this work we introduce and study proportional fairness (PF) in FL, which has a deep connection with game theory. By viewing FL from a cooperative game perspective, where the players (clients) collaboratively learn a good model, we formulate PF as Nash bargaining solutions. Based on this concept, we propose PropFair, a novel and easy-to-implement algorithm for effectively finding PF solutions, and we prove its convergence properties. We illustrate through experiments that PropFair simultaneously improves both the worst-case and the overall performance over state-of-the-art fair FL algorithms on a wide array of vision and language datasets, thus achieving better equity.
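As a rough illustration of the Nash-bargaining viewpoint, the sketch below contrasts a plain weighted average of client losses (the FedAvg-style objective) with a proportional-fairness-style aggregate that minimizes the negative log-product of client "utilities". The function names, the baseline constant `M`, and the clipping floor `eps` are illustrative assumptions made here, not the paper's exact implementation.

```python
import numpy as np

def average_objective(client_losses, weights):
    """FedAvg-style aggregate: weighted average of client losses."""
    losses = np.asarray(client_losses, dtype=float)
    p = np.asarray(weights, dtype=float)
    return float(np.sum(p * losses))

def propfair_style_objective(client_losses, weights, M=5.0, eps=0.2):
    """Nash-bargaining-style aggregate (illustrative sketch).

    Treat (M - loss_i) as client i's utility relative to a baseline M,
    and minimize the negative weighted log-product of utilities:
        -sum_i p_i * log(M - f_i).
    Maximizing a product of utilities is the Nash bargaining solution,
    which characterizes proportional fairness.
    """
    losses = np.asarray(client_losses, dtype=float)
    p = np.asarray(weights, dtype=float)
    # Clip so the log stays defined if a client's loss approaches M.
    utilities = np.maximum(M - losses, eps)
    return float(-np.sum(p * np.log(utilities)))

# Two loss profiles with the same mean: one equal, one unequal.
equal = propfair_style_objective([1.0, 1.0], [0.5, 0.5])
unequal = propfair_style_objective([0.5, 1.5], [0.5, 0.5])
# The average objective cannot tell them apart...
same_avg = average_objective([1.0, 1.0], [0.5, 0.5]) == average_objective([0.5, 1.5], [0.5, 0.5])
# ...but the log-product objective penalizes the unequal profile.
```

The concavity of the logarithm is what makes disparity costly: spreading the same total loss unevenly across clients shrinks the product of utilities, so the objective favors more equitable outcomes without forcing strict equality.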