Motivated by the heterogeneous nature of devices participating in large-scale Federated Learning (FL) optimization, we focus on an asynchronous server-less FL solution empowered by blockchain (BC) technology. In contrast to commonly adopted FL approaches, which assume synchronous operation, we advocate an asynchronous method whereby model aggregation is performed as clients submit their local updates. The asynchronous setting fits the federated optimization idea well in practical large-scale deployments with heterogeneous clients, and thus potentially reduces communication overhead and idle periods. To evaluate the learning completion delay of BC-enabled FL, we provide an analytical model based on batch service queue theory. Furthermore, we provide simulation results to assess the performance of both synchronous and asynchronous mechanisms. Important aspects involved in BC-enabled FL optimization, such as the network size, link capacity, and user requirements, are put together and analyzed. As our results show, the synchronous setting leads to higher prediction accuracy than the asynchronous case. Nevertheless, asynchronous federated optimization provides much lower latency in many cases, thus becoming an appealing solution for FL when dealing with large datasets, tight timing constraints (e.g., near-real-time applications), or highly varying training data.
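The synchronous/asynchronous distinction above can be illustrated with a minimal sketch. In the synchronous setting the aggregator waits for all clients and averages their updates (FedAvg-style), so the round is paced by the slowest client; in the asynchronous setting each update is mixed into the global model as soon as it arrives. The `alpha` mixing weight and the staleness discount are illustrative assumptions, not the paper's exact aggregation rule:

```python
def sync_aggregate(updates):
    # Synchronous round: wait for every client, then average all local
    # updates (FedAvg-style). Fast clients sit idle until the slowest
    # client submits, which is the idle-period cost noted in the abstract.
    n = len(updates)
    return [sum(coords) / n for coords in zip(*updates)]

def async_aggregate(global_model, update, staleness, alpha=0.5):
    # Asynchronous step: mix one client's update into the global model
    # immediately on arrival. Updates computed against an older global
    # model are down-weighted by their staleness (hypothetical discount).
    weight = alpha / (1 + staleness)
    return [(1 - weight) * g + weight * u
            for g, u in zip(global_model, update)]

if __name__ == "__main__":
    # Two clients submit 2-dimensional model updates.
    print(sync_aggregate([[1.0, 1.0], [3.0, 3.0]]))          # averaged round
    model = [0.0, 0.0]
    model = async_aggregate(model, [1.0, 1.0], staleness=0)  # fresh update
    model = async_aggregate(model, [3.0, 3.0], staleness=2)  # stale update
    print(model)
```

The sketch makes the latency trade-off visible: with client delays of, say, 1 s, 2 s, and 10 s, a synchronous round completes only at 10 s, whereas the asynchronous aggregator has already applied two updates by then, at the cost of mixing in potentially stale gradients.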