Blockchain technology has been extensively studied as a means of enabling distributed and tamper-proof data processing in federated learning (FL). Most existing blockchain-assisted FL (BFL) frameworks employ a third-party blockchain network to decentralize the model aggregation process. However, decentralized model aggregation is vulnerable to pooling and collusion attacks launched from the third-party blockchain network. Motivated by this issue, we propose a novel BFL framework that integrates training and mining at the client side. To optimize the learning performance of FL, we maximize the long-term time-average (LTA) training data size under a constraint on LTA energy consumption. To this end, we formulate a joint optimization problem over training-client selection and resource allocation (i.e., the transmit power and computation frequency at the client side), and solve the resulting long-term mixed-integer non-linear program using a Lyapunov technique. In particular, the proposed dynamic resource allocation and client scheduling (DRACS) algorithm achieves an [$\mathcal{O}(1/V)$, $\mathcal{O}(\sqrt{V})$] trade-off between maximizing the LTA training data size and minimizing the LTA energy consumption, tunable via a control parameter $V$. Our experimental results show that the proposed DRACS algorithm achieves higher learning accuracy than benchmark client scheduling strategies under limited time or energy consumption.
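To make the [$\mathcal{O}(1/V)$, $\mathcal{O}(\sqrt{V})$] trade-off concrete, the sketch below illustrates the generic Lyapunov drift-plus-penalty pattern that underlies this class of algorithms: each slot, a virtual queue tracks accumulated energy overshoot, and the action maximizing $V \cdot \text{data} - Q \cdot \text{energy}$ is chosen. This is a minimal illustration only, not the paper's DRACS algorithm; the action set, the per-slot energy budget `E_AVG`, and the value of `V` are all assumed for demonstration.

```python
# Illustrative drift-plus-penalty sketch (hypothetical values, NOT the
# paper's DRACS algorithm). A virtual energy queue Q enforces the LTA
# energy constraint; V trades off data size against energy consumption.

E_AVG = 1.0   # assumed LTA energy budget per slot
V = 10.0      # Lyapunov control parameter

# Candidate (data_size, energy_cost) actions, e.g. client/power choices.
ACTIONS = [(0.0, 0.0), (1.0, 0.8), (2.0, 1.6), (3.0, 2.8)]

def run(T=10000, V=V):
    Q = 0.0                       # virtual energy queue backlog
    total_data = total_energy = 0.0
    for _ in range(T):
        # Drift-plus-penalty: greedily maximize V*data - Q*energy.
        data, energy = max(ACTIONS, key=lambda a: V * a[0] - Q * a[1])
        total_data += data
        total_energy += energy
        # Queue grows when the slot's energy exceeds the budget.
        Q = max(Q + energy - E_AVG, 0.0)
    return total_data / T, total_energy / T

avg_data, avg_energy = run()
```

As $V$ grows, the greedy rule favors larger per-slot data sizes (smaller optimality gap, $\mathcal{O}(1/V)$) while the virtual queue, and hence the transient constraint violation, grows on the order of $\mathcal{O}(\sqrt{V})$.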