Federated learning (FL) is a promising technology for supporting the vision of ubiquitous artificial intelligence in the sixth-generation (6G) wireless communication network. However, traditional FL relies heavily on a trusted centralized server. Moreover, FL is vulnerable to poisoning attacks, and the global aggregation of model updates puts the private training data at risk of being reconstructed. In addition, FL suffers from efficiency problems due to its heavy communication cost. Although decentralized FL eliminates the central dependence of traditional FL, it makes these other problems more serious. In this paper, we propose BlockDFL, an efficient fully peer-to-peer (P2P) framework for decentralized FL. It integrates gradient compression and our designed voting mechanism with blockchain to efficiently coordinate multiple mutually untrusted peer participants in carrying out decentralized FL, while preventing training data from being reconstructed from the transmitted model updates. Extensive experiments on two real-world datasets show that BlockDFL achieves accuracy competitive with centralized FL and can defend against poisoning attacks while achieving efficiency and scalability. In particular, even when the proportion of malicious participants is as high as 40 percent, BlockDFL still preserves the accuracy of FL, outperforming existing fully decentralized FL frameworks.