Federated learning (FL) provides an effective machine learning (ML) architecture for protecting data privacy in a distributed manner. However, inevitable network asynchrony, over-dependence on a central coordinator, and the lack of an open and fair incentive mechanism collectively hinder its further development. We propose \textsc{IronForge}, a new-generation FL framework that features a Directed Acyclic Graph (DAG)-based data structure and eliminates the need for central coordinators, achieving fully decentralized operation. \textsc{IronForge} runs in a public and open network, and launches a fair incentive mechanism by enabling state consistency in the DAG, so that the system fits networks where training resources are unevenly distributed. In addition, dedicated defense strategies against prevalent FL attacks on incentive fairness and data privacy are presented to ensure the security of \textsc{IronForge}. Experimental results based on a newly developed testbed, FLSim, highlight the superiority of \textsc{IronForge} over existing prevalent FL frameworks under various specifications in terms of performance, fairness, and security. To the best of our knowledge, \textsc{IronForge} is the first secure and fully decentralized FL framework that can be applied in open networks with realistic network and training settings.