The ultra-low latency requirements of 5G/6G applications and privacy constraints call for distributed machine learning systems to be deployed at the edge. With its simple yet effective approach, federated learning (FL) has proven to be a natural solution for the massive number of user-owned devices in edge computing, whose training data are distributed and private. Most vanilla FL algorithms based on FedAvg follow a naive star topology, ignoring the heterogeneity and hierarchy of the volatile edge computing architectures and topologies found in practice. In this paper, we conduct a comprehensive survey of existing work on optimized FL models, frameworks, and algorithms, with a focus on their network topologies. After a brief recap of FL and edge computing networks, we introduce the various types of edge network topologies, along with the optimizations proposed under each of these topologies. Lastly, we discuss the remaining challenges and future directions for applying FL in topology-specific edge networks.