Federated learning (FL) has emerged as a popular technique for training machine learning (ML) models over edge/fog networks. Traditional implementations of FL have largely neglected the potential for inter-network cooperation, treating edge/fog devices and other infrastructure participating in ML as separate processing elements. Consequently, FL has been vulnerable to several dimensions of network heterogeneity, such as varying computation capabilities, communication resources, data qualities, and privacy demands. We advocate for cooperative federated learning (CFL), a cooperative edge/fog ML paradigm built on device-to-device (D2D) and device-to-server (D2S) interactions. Through D2D and D2S cooperation, CFL counteracts network heterogeneity in edge/fog networks by enabling a model/data/resource pooling mechanism, which yields substantial improvements in ML model training quality and reductions in network resource consumption. We propose a set of core methodologies that form the foundation of D2D and D2S cooperation and present preliminary experiments that demonstrate their benefits. We also discuss new FL functionalities enabled by this cooperative framework, such as the integration of unlabeled data and heterogeneous device privacy into ML model training. Finally, we describe open research directions at the intersection of cooperative edge/fog networking and FL.
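As a concrete illustration of the D2D/D2S interplay, the following Python/NumPy sketch walks through one hypothetical CFL round: devices train locally, pool their models with D2D neighbors via consensus averaging, and a subset of devices uploads to the server for D2S aggregation. This is a minimal sketch under our own assumptions, not the paper's implementation; the names (local_step, cfl_round, adjacency, d2d_rounds), the least-squares objective, and the choice of which devices upload are all illustrative.

import numpy as np

# Minimal, hypothetical sketch of one CFL training round (illustrative only).
# Devices run a local gradient step, pool models with D2D neighbors via
# consensus averaging, and a subset uploads to the server for D2S aggregation.

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on a device's local data.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def cfl_round(models, data, adjacency, d2d_rounds=2, lr=0.1):
    n = len(models)
    # 1) Local training on each device's own data.
    models = [local_step(w, X, y, lr) for w, (X, y) in zip(models, data)]
    # 2) D2D cooperation: each device averages its model with its neighbors'
    #    (model pooling); repeating this spreads information across the graph.
    for _ in range(d2d_rounds):
        models = [np.mean([models[j] for j in range(n) if adjacency[i][j] or i == j], axis=0)
                  for i in range(n)]
    # 3) D2S cooperation: only a subset of devices uploads to the server,
    #    which averages their (already pooled) models and broadcasts the result.
    uploads = models[::2]  # assumption: even-indexed devices reach the server
    w_global = np.mean(uploads, axis=0)
    return [w_global.copy() for _ in models]

# Usage: four devices on a line graph, each holding synthetic regression data.
rng = np.random.default_rng(0)
dim, n_samples, n_devices = 3, 20, 4
data = [(rng.normal(size=(n_samples, dim)), rng.normal(size=n_samples))
        for _ in range(n_devices)]
models = [np.zeros(dim) for _ in range(n_devices)]
adjacency = [[abs(i - j) == 1 for j in range(n_devices)] for i in range(n_devices)]
for _ in range(10):
    models = cfl_round(models, data, adjacency)

In this sketch, the D2D stage pools models across neighboring devices before any server contact, so devices with weak channels or scarce data still benefit from their neighbors' training; the D2S stage then needs uploads from only a subset of devices, which is where the resource savings arise. The d2d_rounds parameter controls how much heterogeneity the D2D stage smooths out before global aggregation.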