In this paper, we study the problem of fair worker selection in Federated Learning systems, where fairness serves as an incentive mechanism that encourages more workers to participate in the federation. Treating the training accuracy achieved by the global model as the utility of the selected workers, which is typically a monotone submodular function of the selected set, we formulate the worker selection problem as a new multi-round monotone submodular maximization problem with cardinality and fairness constraints. The objective is to maximize the time-average utility over multiple rounds subject to an additional fairness requirement that each worker must be selected for a certain fraction of time. While traditional submodular maximization with a cardinality constraint is already a well-known NP-hard problem, the fairness constraint in the multi-round setting adds an extra layer of difficulty. To address this novel challenge, we propose three algorithms: Fair Continuous Greedy (FairCG1 and FairCG2) and Fair Discrete Greedy (FairDG), all of which satisfy the fairness requirement whenever it is feasible. Moreover, we prove nontrivial lower bounds on the time-average utility achieved by FairCG1 and FairCG2. In addition, by giving fairness a higher priority, FairDG ensures a stronger short-term fairness guarantee that holds in every round. Finally, we perform extensive simulations to verify the effectiveness of the proposed algorithms in terms of time-average utility and fairness satisfaction.
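For concreteness, the multi-round formulation described above can be sketched as follows. The notation is assumed for illustration and is not fixed by the abstract: $\mathcal{N}$ denotes the worker set, $S_t$ the set of workers selected in round $t$, $k$ the cardinality budget, $r_i$ worker $i$'s required long-term selection fraction, and $f : 2^{\mathcal{N}} \to \mathbb{R}_{\ge 0}$ the monotone submodular utility.

\begin{align*}
  \max_{\{S_t \subseteq \mathcal{N}\}_{t \ge 1}} \quad
    & \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} f(S_t)
    && \text{(time-average utility)} \\
  \text{s.t.} \quad
    & |S_t| \le k, \quad \forall t
    && \text{(cardinality constraint)} \\
    & \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}\{i \in S_t\} \ge r_i, \quad \forall i \in \mathcal{N}
    && \text{(long-term fairness)}
\end{align*}

The short-term fairness guarantee mentioned for FairDG would correspond to a per-round (rather than time-average) version of the last constraint.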