Motivated by the heterogeneous nature of devices participating in large-scale Federated Learning (FL) optimization, we focus on an asynchronous, server-less FL solution empowered by Blockchain (BC) technology. In contrast to commonly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is performed as clients submit their local updates. The asynchronous setting fits well with the federated optimization idea in practical large-scale deployments with heterogeneous clients, and thus potentially leads to higher efficiency in terms of communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects of BC-enabled FL optimization, such as the network size, link capacity, and user requirements, are brought together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous one. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, making it an appealing FL solution when dealing with large data sets, tight timing constraints (e.g., near-real-time applications), or highly varying training data.
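The learning completion delay of such a BC-enabled system can be explored with a batch service queue, where client updates arrive over time and a miner packs them into blocks of a fixed size. The following is a minimal, illustrative discrete-event sketch of that idea; the parameter names (`arrival_rate`, `batch_size`, `service_time`) and the specific values are assumptions for demonstration, not the analytical model from the paper.

```python
import random

def simulate_batch_queue(n_updates=10_000, arrival_rate=1.0,
                         batch_size=10, service_time=0.5, seed=0):
    """Toy batch-service queue: client updates arrive as a Poisson process;
    the server (miner) waits until `batch_size` updates have accumulated,
    then processes the whole batch (a block) in `service_time` time units.
    Returns the mean delay from an update's arrival to its batch completion.
    Illustrative sketch only; parameters are hypothetical."""
    rng = random.Random(seed)
    # Generate cumulative Poisson arrival times for all updates.
    t = 0.0
    arrivals = []
    for _ in range(n_updates):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    server_free = 0.0
    total_delay = 0.0
    for i in range(0, n_updates, batch_size):
        batch = arrivals[i:i + batch_size]
        # A batch can start only once it is full and the server is idle.
        start = max(server_free, batch[-1])
        done = start + service_time
        server_free = done
        total_delay += sum(done - a for a in batch)
    return total_delay / n_updates

mean_delay = simulate_batch_queue()
```

With these assumed parameters, most of an update's delay comes from waiting for the block to fill rather than from service time, which is precisely the trade-off that distinguishes batched (block-based) aggregation from per-update processing.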