Federated learning (FL), as a distributed machine learning paradigm, protects personal privacy by having clients process raw data locally. However, because it relies on a centralized server for model aggregation, standard FL is vulnerable to server malfunctions, an untrustworthy server, and external attacks. To address this issue, we propose a decentralized FL framework that integrates blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL). In each round of the proposed BLADE-FL, every client broadcasts its trained model to the other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training in the next round. We evaluate the learning performance of BLADE-FL and develop an upper bound on the global loss function. We then verify that this bound is convex with respect to the total number of rounds K, and optimize the computing resource allocation to minimize the upper bound. We also identify a critical problem of training deficiency caused by lazy clients, who plagiarize others' trained models and add artificial noise to disguise their cheating behavior. Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients. Experiments on the MNIST and Fashion-MNIST datasets show that the results are consistent with the analysis: the gap between the developed upper bound and the experimental results is below 5%, and the K optimized from the upper bound effectively minimizes the loss function.
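The round structure described above (local training, broadcast, block generation, then aggregation from the block) can be illustrated with a minimal toy sketch. This is a hypothetical simplification, not the paper's implementation: models are scalars, local training is gradient descent on a squared loss, and the block-generation competition is stood in for by a random winner.

```python
import random

def local_train(model, data, lr=0.1, steps=5):
    """One client's local training: gradient descent on a squared loss."""
    for _ in range(steps):
        grad = sum(2 * (model - x) for x in data) / len(data)
        model -= lr * grad
    return model

def blade_fl_round(models, datasets, rng):
    """One BLADE-FL round (toy version): train, broadcast, mine, aggregate."""
    # 1) Each client trains on its own private data.
    trained = [local_train(m, d) for m, d in zip(models, datasets)]
    # 2) Broadcast: every client receives all trained models.
    # 3) Block generation: one client wins the competition and packs the
    #    received models into a block (random choice stands in for mining).
    winner = rng.randrange(len(trained))
    block = {"producer": winner, "models": list(trained)}
    # 4) Aggregation: each client averages the models recorded in the block
    #    before starting its next round of local training.
    aggregate = sum(block["models"]) / len(block["models"])
    return [aggregate] * len(models), block

rng = random.Random(0)
datasets = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # per-client private data
models = [0.0, 0.0, 0.0]
for _ in range(10):  # K = 10 overall rounds
    models, block = blade_fl_round(models, datasets, rng)
```

Because every client aggregates the same block, all clients hold an identical model after each round; in this toy setting the shared model converges to the mean of all clients' data.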