Federated learning (FL) mechanisms typically require each client to transfer its model weights to a central server, regardless of how useful those weights are. To avoid wasteful client-to-server data transfer, we propose using consensus-based protocols to identify, at each transfer step, a subset of clients whose model weights are most useful. First, we study the performance of existing fluid democracy protocols applied to FL, comparing them with traditional one-person-one-vote aggregation (also known as 1p1v or FedAvg). We propose a new fluid democracy protocol, viscous-retained democracy, that always outperforms 1p1v under the same assumptions as existing fluid democracy protocols while also preventing influence accumulation. Second, we analyze fluid democracy protocols from an adversarial perspective, identifying weaknesses stemming from their dependence on the delegation topology and on the number of adversaries required to degrade the global model weights. To address these weaknesses, we propose an algorithm (FedVRD) that dynamically limits the effect of adversaries while minimizing cost by leveraging the delegation topology.
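To make the contrast concrete, the following is a minimal sketch (not the paper's implementation) of plain FedAvg aggregation versus aggregating only a vote-selected client subset; the function names, the vote vector, and the parameter `k` are illustrative assumptions, and the delegation/voting mechanism itself is abstracted away into precomputed per-client vote counts.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Plain 1p1v / FedAvg: dataset-size-weighted average over ALL clients,
    so every client must upload its weights."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def consensus_subset_avg(client_weights, client_sizes, votes, k):
    """Hypothetical consensus-based variant: only the k clients holding the
    most (possibly delegated) votes upload and are averaged; the rest skip
    the transfer entirely, saving client-to-server bandwidth."""
    top = np.argsort(votes)[-k:]          # indices of the k most-voted clients
    selected_w = [client_weights[i] for i in top]
    selected_n = [client_sizes[i] for i in top]
    return fedavg(selected_w, selected_n)
```

Under this sketch, the bandwidth saving is simply that `n - k` clients never transmit their weight vectors in a given round; the open question the abstract addresses is how to choose the voting protocol so that the selected subset preserves (or improves) global model quality.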