Federated learning has emerged as a promising, massively distributed way to train a joint deep model across a large number of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against state-of-the-art methods.
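To make the multi-objective formulation concrete, below is a minimal sketch (our own illustration, not the paper's reference implementation) of the classical MGDA min-norm subproblem that underlies MGDA-style methods such as FedMGDA+: given one gradient per user, find simplex weights lambda minimizing ||sum_i lambda_i g_i||^2; the minimizer yields a common direction that does not increase any user's loss, and a (near-)zero minimum norm certifies Pareto stationarity. The Frank-Wolfe solver and the function name `mgda_weights` are assumptions chosen for illustration.

```python
import numpy as np

def mgda_weights(grads, n_iters=100, tol=1e-6):
    """Frank-Wolfe solver for the MGDA min-norm subproblem:
        min_{lambda in simplex} || sum_i lambda_i * g_i ||^2
    grads: (m, d) array with one flattened per-user gradient per row.
    Returns simplex weights lambda of shape (m,)."""
    m = grads.shape[0]
    lam = np.full(m, 1.0 / m)      # start from uniform weights
    GG = grads @ grads.T           # m x m Gram matrix of user gradients
    for _ in range(n_iters):
        d = GG @ lam               # d[i] = <g_i, sum_j lam_j g_j>
        t = int(np.argmin(d))      # vertex most opposed to current combination
        # Exact line search between u = G^T lam and vertex v = g_t:
        # minimize ||(1 - gamma) * u + gamma * v||^2 over gamma in [0, 1].
        uu = lam @ d               # ||u||^2
        uv = d[t]                  # <u, v>
        vv = GG[t, t]              # ||v||^2
        denom = uu - 2.0 * uv + vv # ||u - v||^2
        if denom <= tol:           # u == v: no progress possible
            break
        gamma = np.clip((uu - uv) / denom, 0.0, 1.0)
        new_lam = (1.0 - gamma) * lam
        new_lam[t] += gamma
        if np.abs(new_lam - lam).sum() < tol:
            lam = new_lam
            break
        lam = new_lam
    return lam
```

In a federated round, the server would step along `-grads.T @ mgda_weights(grads)`; a near-zero norm of that combination indicates an (approximately) Pareto stationary point. FedMGDA+ further modifies this subproblem (e.g., with gradient normalization and weights kept close to uniform for robustness), which this sketch deliberately omits.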