We propose near-optimal overlay networks based on $d$-regular expander graphs to accelerate decentralized federated learning (DFL) and improve its generalization. In DFL, a massive number of clients are connected by an overlay network and collaboratively solve machine learning problems without sharing raw data. Our overlay network design integrates spectral graph theory with theoretical convergence and generalization bounds for DFL. As a result, the proposed overlay networks accelerate convergence, improve generalization, and enhance robustness to client failures in DFL, all with theoretical guarantees. We also present an efficient algorithm to convert a given graph into a practical overlay network and to maintain the network topology after potential client failures. We numerically verify the advantages of DFL with the proposed networks on various benchmark tasks, ranging from image classification to language modeling, using hundreds of clients.
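As background for the spectral-graph-theory connection above, the quantity that governs how fast decentralized averaging mixes over a $d$-regular overlay is the spectral gap of its mixing matrix $W = A/d$. The sketch below is purely illustrative and is not the paper's construction: it compares the spectral gap of two hand-built 4-regular circulant topologies (the `circulant_regular` helper and the specific chord offsets are our own assumptions), showing that longer-range links, which push a graph toward expander-like behavior, enlarge the gap.

```python
import numpy as np

def circulant_regular(n, offsets):
    """Adjacency matrix of a circulant graph on n nodes:
    node i links to i +/- k (mod n) for each k in offsets.
    With m distinct offsets this gives a 2m-regular graph."""
    A = np.zeros((n, n))
    for i in range(n):
        for k in offsets:
            A[i, (i + k) % n] = 1.0
            A[i, (i - k) % n] = 1.0
    return A

def spectral_gap(A):
    """Spectral gap 1 - max(|lambda_2|, ..., |lambda_n|) of the
    doubly stochastic mixing matrix W = A / d; a larger gap means
    faster mixing of decentralized averaging."""
    d = A.sum(axis=1)[0]                       # common degree (regular graph)
    W = A / d
    eigs = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 - eigs[1]                       # eigs[0] == 1 for a connected graph

# 4-regular ring with nearest chords (poor expander) vs. long-range chords
ring = circulant_regular(64, offsets=(1, 2))
skip = circulant_regular(64, offsets=(1, 14))
print(f"ring gap: {spectral_gap(ring):.4f}, long-range gap: {spectral_gap(skip):.4f}")
```

The long-range variant has a markedly larger spectral gap than the plain ring, which is the behavior that expander-based overlays exploit: near-optimal expanders maximize this gap for a fixed degree $d$.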