Federated learning (FL) enables a loose federation of participating clients to collaboratively learn a global model, coordinated by a central server and without any need to share data. Existing FL approaches that rely on complex algorithms with massive models, such as deep neural networks (DNNs), suffer from computation and communication bottlenecks. In this paper, we first propose FedHDC, a federated learning framework based on hyperdimensional computing (HDC). FedHDC allows for fast and lightweight local training on clients, provides robust learning, and incurs smaller model communication overhead than learning with DNNs. However, current HDC algorithms achieve poor accuracy when classifying larger and more complex images, such as those in CIFAR10. To address this issue, we design FHDnn, which complements FedHDC with a self-supervised contrastive learning feature extractor. We avoid transmitting the DNN and instead train only the HDC learner in a federated manner, which accelerates learning, reduces transmission cost, and leverages the robustness of HDC to tackle network errors. We present a formal analysis of the algorithm, derive its convergence rate theoretically, and show experimentally that FHDnn converges 3$\times$ faster than DNNs. Our design reduces communication costs by 66$\times$ vs. DNNs and local client compute and energy consumption by ~1.5-6$\times$, while being highly robust to network errors. Finally, the strategies we propose to improve communication efficiency provide up to 32$\times$ lower communication costs while maintaining good accuracy.
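To make the high-level design concrete, the following is a minimal sketch (not the paper's reference implementation) of the client/server roles described above: a frozen feature extractor feeds a random-projection HDC encoder, clients locally bundle class hypervectors, and the server aggregates only those hypervectors. The dimensionality, the synthetic features standing in for the contrastive extractor's outputs, and the helper names `encode`, `train_client`, `aggregate`, and `predict` are illustrative assumptions.

```python
import numpy as np

D = 10_000          # hypervector dimensionality (assumed for illustration)
NUM_CLASSES = 10
FEAT_DIM = 512      # output size of the frozen contrastive feature extractor (assumed)

rng = np.random.default_rng(0)
# Shared random projection: maps extractor features into hyperdimensional space.
# Clients and server derive it from a common seed, so it is never transmitted.
PROJ = rng.standard_normal((D, FEAT_DIM))

def encode(features):
    """Encode a (batch, FEAT_DIM) feature matrix into bipolar hypervectors."""
    return np.sign(features @ PROJ.T)

def train_client(features, labels):
    """Local HDC training: bundle (sum) encoded samples per class.
    The (NUM_CLASSES x D) class-hypervector matrix is all the client uploads."""
    class_hvs = np.zeros((NUM_CLASSES, D))
    hvs = encode(features)
    for c in range(NUM_CLASSES):
        class_hvs[c] = hvs[labels == c].sum(axis=0)
    return class_hvs

def aggregate(client_models):
    """Server-side aggregation: element-wise bundling of client class hypervectors."""
    return np.sum(client_models, axis=0)

def predict(global_model, features):
    """Classify by cosine similarity between encoded queries and class hypervectors."""
    hvs = encode(features)
    model_n = global_model / (np.linalg.norm(global_model, axis=1, keepdims=True) + 1e-9)
    hvs_n = hvs / (np.linalg.norm(hvs, axis=1, keepdims=True) + 1e-9)
    return (hvs_n @ model_n.T).argmax(axis=1)

# Toy federated round: synthetic "features" stand in for the frozen extractor's
# outputs (in FHDnn these would come from a pretrained contrastive model).
client_models = []
for _ in range(3):
    feats = rng.standard_normal((100, FEAT_DIM))
    labels = rng.integers(0, NUM_CLASSES, size=100)
    client_models.append(train_client(feats, labels))
global_model = aggregate(client_models)
print(predict(global_model, rng.standard_normal((5, FEAT_DIM))))
```

Because only the class hypervectors travel between clients and server, the per-round payload is fixed by D and the number of classes rather than by the size of a DNN, which is the source of the communication savings claimed above.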