This paper addresses two major challenges of Federated Learning (FL) on edge devices: limited memory and expensive communication. We propose Partial Variable Training (PVT), a novel method that trains only a small subset of variables on edge devices to reduce memory usage and communication cost. With PVT, we show that network accuracy can be maintained by utilizing more local training steps and more devices, which is favorable for FL involving a large population of devices. According to our experiments on two state-of-the-art neural networks for speech recognition and two different datasets, PVT can reduce memory usage by up to 1.9$\times$ and communication cost by up to 593$\times$ while attaining accuracy comparable to full network training.
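To make the idea concrete, the following is a minimal sketch of partial variable training inside a FedAvg-style round, written in PyTorch. The helper names (`select_trainable`, `local_update`, `server_aggregate`) and the rule for choosing which variables stay trainable are illustrative assumptions, not the paper's exact PVT scheme.

```python
# A minimal, hypothetical sketch of partial variable training (PVT) in a
# FedAvg-style round. The variable-selection rule and helper names here
# are assumptions for illustration, not the paper's exact method.
from itertools import cycle

import torch
import torch.nn as nn


def select_trainable(model, trainable_names):
    """Freeze every parameter except those whose names are listed."""
    for name, param in model.named_parameters():
        param.requires_grad = name in trainable_names


def local_update(model, data_loader, steps, lr=0.01):
    """Run a few local SGD steps on the unfrozen subset only."""
    opt = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for (x, y), _ in zip(cycle(data_loader), range(steps)):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    # Only the trained subset needs to be uploaded to the server.
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}


def server_aggregate(model, client_subsets):
    """FedAvg-style merge: average each trained variable across clients
    and write it back into the full server model."""
    with torch.no_grad():
        state = dict(model.named_parameters())
        for name in client_subsets[0]:
            avg = torch.stack([c[name] for c in client_subsets]).mean(0)
            state[name].copy_(avg)
```

Under these assumptions, the savings claimed in the abstract fall out directly: no gradients or optimizer state are allocated for frozen variables (the memory reduction), and each round uploads only the small trained subset rather than the full model (the communication reduction).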