With the rapid growth of mobile computing, massive amounts of data and computing resources now reside at the network edge. Federated learning (FL) has therefore become a widely adopted distributed machine learning (ML) paradigm that aims to harness this expanding, skewed data locally in order to build rich and informative models. In centralized FL, a collection of devices collaboratively solves an ML task under the coordination of a central server. However, existing FL frameworks make an overly simplistic assumption about network connectivity and ignore the communication bandwidth of the different links in the network. In this paper, we present and study a novel FL algorithm in which devices mostly collaborate with other devices in a pairwise manner. Our nonparametric approach exploits network topology to reduce communication bottlenecks. We evaluate our approach on various FL benchmarks and show that it achieves 10x better communication efficiency and an accuracy increase of around 8% compared to the centralized approach.
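To make the pairwise-collaboration idea concrete, the following is a minimal sketch of gossip-style model averaging between neighboring devices on a fixed communication graph. It illustrates the general decentralized-FL pattern only and is not the paper's actual algorithm; the helper names `local_step` and `gossip_round`, the least-squares local objective, and the ring topology are all assumptions made for this example.

```python
import numpy as np

def local_step(model, data, lr=0.1):
    """One local SGD step on a device's private data (illustrative least-squares loss)."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def gossip_round(models, edges):
    """Each pair of devices connected by a link (i, j) averages its parameters."""
    for i, j in edges:
        avg = (models[i] + models[j]) / 2.0
        models[i], models[j] = avg.copy(), avg.copy()
    return models

# Toy usage: 4 edge devices on a ring topology, each holding private data.
rng = np.random.default_rng(0)
d, n = 5, 20
true_w = rng.normal(size=d)
datasets = []
for _ in range(4):
    X = rng.normal(size=(n, d))
    datasets.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

models = [rng.normal(size=d) for _ in range(4)]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
for _ in range(50):
    models = [local_step(m, ds) for m, ds in zip(models, datasets)]
    models = gossip_round(models, ring)
```

Each round alternates a local update on private data with parameter averaging over the links of the graph, so no central server ever collects all models; the amount of communication per device is bounded by its number of neighbors rather than by the size of the federation.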