We propose SwiftAgg+, a novel secure aggregation protocol for federated learning systems, in which a central server aggregates, in a privacy-preserving manner, the local models of $N \in \mathbb{N}$ distributed users, each model of size $L \in \mathbb{N}$ and trained on the user's local data. SwiftAgg+ significantly reduces communication overhead without compromising security, and achieves communication loads that are optimal to within a diminishing gap. Specifically, in the presence of at most $D$ dropout users, SwiftAgg+ achieves a per-user communication load of $(1+\mathcal{O}(\frac{1}{N}))L$ and a server communication load of $(1+\mathcal{O}(\frac{1}{N}))L$, with a worst-case information-theoretic security guarantee against any subset of up to $T$ semi-honest users who may also collude with the curious server. Moreover, SwiftAgg+ allows for a flexible trade-off between communication loads and the number of active communication links. In particular, for any $K\in\mathbb{N}$, SwiftAgg+ achieves a server communication load of $(1+\frac{T}{K})L$, a per-user communication load of up to $(1+\frac{T+D}{K})L$, and a number of pairwise active connections in the network of $\frac{N}{2}(K+T+D+1)$.
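As a quick sanity check of the stated trade-off, the closed-form expressions above can be evaluated directly. The following is a minimal sketch (not from the paper's artifacts; the function name and interface are hypothetical) that computes the server load, worst-case per-user load, and number of active links for given system parameters:

```python
def swiftaggplus_loads(N, L, T, D, K):
    """Evaluate the abstract's communication trade-off for SwiftAgg+.

    N: number of users, L: model size (symbols),
    T: colluding semi-honest users, D: dropout users,
    K: trade-off parameter (any positive integer).
    Returns (server load, worst-case per-user load, active pairwise links).
    """
    server_load = (1 + T / K) * L           # (1 + T/K) L
    per_user_load = (1 + (T + D) / K) * L   # up to (1 + (T+D)/K) L
    active_links = N * (K + T + D + 1) / 2  # (N/2)(K + T + D + 1)
    return server_load, per_user_load, active_links
```

For example, with $N=100$, $L=1000$, $T=2$, $D=3$, and $K=10$, the sketch gives a server load of $1200$, a per-user load of up to $1500$, and $800$ active links; increasing $K$ drives both loads toward $L$ at the cost of more active links, which is the trade-off the abstract describes.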