Federated learning is a semi-distributed algorithm in which a server communicates with multiple dispersed clients to learn a global model. The federated architecture is not robust: its one-master, multi-client structure makes it sensitive to communication and computational overloads, and it is also exposed to privacy attacks targeting personal information on the communication links. In this work, we introduce graph federated learning (GFL), which consists of multiple federated units connected by a graph. We then show how graph homomorphic perturbations can be used to ensure the algorithm is differentially private. We carry out both convergence and privacy analyses and illustrate the performance of the algorithm by means of computer simulations.
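To make the setting concrete, the following is a minimal sketch of one graph-federated iteration under stated assumptions: a ring of server units with a doubly stochastic combination matrix, one local gradient step per client on a synthetic quadratic loss, and zero-sum perturbations across servers as one plausible reading of the graph homomorphic noise. The combination matrix, the loss, and the noise construction are illustrative choices, not the paper's exact algorithm.

```python
# Illustrative sketch only. Assumptions: a doubly stochastic combination matrix A,
# synthetic quadratic client losses, and correlated noise terms that sum to zero
# over the network, so the network-average model is unaffected while individual
# messages on the links are masked. This is a hypothetical reading of "graph
# homomorphic perturbations", not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(0)

S, K, d = 4, 5, 3            # server units, clients per server, model dimension
mu, sigma, T = 0.1, 0.5, 200  # step size, noise level, iterations

# Ring graph of servers with a simple doubly stochastic combination matrix.
A = np.zeros((S, S))
for s in range(S):
    A[s, s] = 0.5
    A[s, (s - 1) % S] = 0.25
    A[s, (s + 1) % S] = 0.25

# Client (s, k) holds a target x_sk and the quadratic loss ||w - x_sk||^2.
targets = rng.normal(size=(S, K, d))
W = np.zeros((S, d))          # current model copy at each server

for t in range(T):
    # 1) Each server aggregates one local gradient step from its own clients.
    local = np.zeros((S, d))
    for s in range(S):
        grads = 2 * (W[s] - targets[s])            # per-client gradients, shape (K, d)
        local[s] = W[s] - mu * grads.mean(axis=0)

    # 2) Zero-sum perturbations: subtracting the mean makes the noise cancel in
    #    the network-wide aggregate because A is doubly stochastic.
    noise = sigma * rng.normal(size=(S, d))
    noise -= noise.mean(axis=0)

    # 3) Servers exchange perturbed models with their graph neighbors and combine.
    messages = local + noise
    W = A @ messages

print("disagreement across servers:", np.max(np.abs(W - W.mean(axis=0))))
print("distance of average model to global minimizer:",
      np.linalg.norm(W.mean(axis=0) - targets.mean(axis=(0, 1))))
```

Because the noise sums to zero and the combination matrix has unit column sums, the perturbations leave the network-average iterate unchanged; only the messages exchanged on the links are masked, which is the property the sketch is meant to illustrate.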