Blockchain-assisted federated learning (BFL) has been widely studied as a promising technology for processing data at the network edge in a distributed manner. However, BFL raises key challenges, including resource allocation and client scheduling. In this paper, we propose a BFL framework consisting of multiple clients, where in each round every client performs local model training, wireless uploading, and block mining. First, we develop a renewal BFL framework to study long-term system performance under time-varying fading channels. Second, to speed up the BFL process with limited communication, computation, and energy resources, we propose a dynamic resource allocation and client scheduling (DRACS) algorithm based on Lyapunov optimization, which maximizes the training data size under energy consumption constraints by jointly optimizing the allocation of communication, computation, and energy resources. For the DRACS algorithm, we characterize an [$\mathcal{O}(1/V)$, $\mathcal{O}(\sqrt{V})$] trade-off between training data size and energy consumption, balancing the maximization of the former against the minimization of the latter. Experimental results on the MNIST and Fashion-MNIST datasets show that DRACS achieves both higher learning accuracy and faster convergence under limited time and energy budgets.
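To make the trade-off concrete, the following is a minimal sketch of the drift-plus-penalty objective that typically underlies such Lyapunov-based designs; the symbols $Q_i(t)$, $D(t)$, $E_i(t)$, and $e_i$ are illustrative placeholders, not the paper's notation. Each client $i$ maintains a virtual energy-deficit queue that accumulates spending above a per-round energy budget $e_i$, and in each round the controller picks the resource allocation and client schedule that solve
\[
  \max_{\text{allocation, scheduling}} \; V\,D(t) \;-\; \sum_{i} Q_i(t)\,E_i(t),
  \qquad
  Q_i(t+1) = \max\{\,Q_i(t) + E_i(t) - e_i,\ 0\,\},
\]
where $D(t)$ is the training data size scheduled in round $t$ and $E_i(t)$ is the energy client $i$ would consume. A larger control parameter $V$ weights the data-size reward more heavily, driving its time average to within $\mathcal{O}(1/V)$ of optimal while allowing the accumulated energy deficit, and hence the energy-constraint violation, to grow on the order of $\mathcal{O}(\sqrt{V})$.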