Federated learning (FL), as a distributed machine learning paradigm, preserves personal privacy by keeping data processing local at each client. However, because standard FL relies on a centralized server for model aggregation, it is vulnerable to server malfunctions, an untrustworthy server, and external attacks. To address this issue, we propose a decentralized FL framework that integrates blockchain into FL, namely blockchain-assisted decentralized federated learning (BLADE-FL). In each round of the proposed BLADE-FL, every client broadcasts its trained model to the other clients, aggregates its own model with the received ones, and then competes to generate a block before starting the next round of local training. We evaluate the learning performance of BLADE-FL and develop an upper bound on the global loss function. We then verify that this bound is convex with respect to the total number of aggregation rounds K, and optimize the allocation of computing resources to minimize the bound. We also identify a critical problem of training deficiency caused by lazy clients, who plagiarize others' trained models and add artificial noise to disguise their cheating. Focusing on this problem, we analyze the impact of lazy clients on the learning performance of BLADE-FL and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients. Experiments on the MNIST and Fashion-MNIST datasets show that the results are consistent with the analysis: the gap between the developed upper bound and the experimental results is below 5%, and the K optimized from the upper bound effectively minimizes the loss function.
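The round structure described above (local training, all-to-all model broadcast, local aggregation, then block generation) can be sketched as a toy simulation. This is a minimal illustrative sketch, not the paper's actual algorithm: models are plain weight vectors, local training is a random perturbation, aggregation is simple averaging, and block generation is a trivial proof-of-work; all function names and parameters are assumptions for illustration only.

```python
import hashlib
import random

NUM_CLIENTS = 4   # hypothetical number of clients
MODEL_DIM = 3     # hypothetical model size

def local_train(model):
    # Toy stand-in for local training: perturb each weight slightly.
    return [w + random.uniform(-0.1, 0.1) for w in model]

def aggregate(own, received):
    # Each client averages its own model with all received ones.
    models = [own] + received
    return [sum(ws) / len(models) for ws in zip(*models)]

def mine_block(payload, difficulty=2):
    # Toy proof-of-work: search for a nonce whose hash has leading zeros.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

def blade_fl_round(models):
    # 1. Every client trains locally.
    trained = [local_train(m) for m in models]
    # 2. Every client broadcasts its model and aggregates with received ones.
    aggregated = [aggregate(trained[i], trained[:i] + trained[i + 1:])
                  for i in range(len(trained))]
    # 3. Clients compete to generate a block recording the aggregated model
    #    (here only the winner's mining step is simulated).
    _, digest = mine_block(str(aggregated[0]))
    return aggregated, digest

random.seed(0)
models = [[0.0] * MODEL_DIM for _ in range(NUM_CLIENTS)]
aggregated, digest = blade_fl_round(models)
```

Because every client receives every other client's model, all clients end the round holding the same averaged model, which is what removes the need for a central aggregation server in this sketch.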