Federated learning (FL) enables edge devices to collaboratively learn a model in a distributed fashion. Much existing research has focused on improving the communication efficiency of high-dimensional models and on addressing the bias caused by local updates. However, most FL algorithms either rely on reliable communications or assume fixed and known unreliability characteristics. In practice, networks may suffer from dynamic channel conditions and non-deterministic disruptions, whose characteristics are time-varying and unknown. To this end, in this paper we propose a sparsity-enabled FL framework with both communication efficiency and bias reduction, termed SAFARI. It makes novel use of the similarity among client models to rectify and compensate for the bias that results from unreliable communications. More precisely, sparse learning is implemented on local clients to mitigate communication overhead, and, to cope with unreliable communications, a similarity-based compensation method is proposed that provides surrogates for missing model updates. We analyze SAFARI under a bounded-dissimilarity assumption and with respect to sparse models, and show that under unreliable communications it is guaranteed to converge at the same rate as standard FedAvg with perfect communications. Implementations and evaluations on the CIFAR-10 dataset validate the effectiveness of SAFARI, showing that it can match the convergence speed and accuracy of FedAvg with perfect communications even when up to 80% of the model weights are pruned and a high percentage of client updates are missing in each round.
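The abstract does not spell out SAFARI's internals, but the two ingredients it names, sparse local updates and similarity-based surrogates for lost updates, can be illustrated with a minimal NumPy sketch. The function names below (`sparsify`, `cosine_sim`, `aggregate_with_surrogates`) are hypothetical, and the use of cosine similarity over each client's last successfully received update as the similarity measure is an assumption made for illustration, not the paper's exact rule.

```python
import numpy as np

def sparsify(update, keep_ratio=0.2):
    # Keep only the largest-magnitude weights (here up to 80% pruned),
    # mirroring the sparse local learning described in the abstract.
    k = max(1, int(keep_ratio * update.size))
    thresh = np.partition(np.abs(update).ravel(), -k)[-k]
    return np.where(np.abs(update) >= thresh, update, 0.0)

def cosine_sim(a, b):
    # Cosine similarity between two flattened model updates.
    return float(a.ravel() @ b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def aggregate_with_surrogates(updates, last_known):
    # updates[i] is the sparse update from client i, or None if its
    # transmission was lost this round; last_known[i] is the most recent
    # update successfully received from client i.
    received = [i for i, u in enumerate(updates) if u is not None]
    surrogates = []
    for i, u in enumerate(updates):
        if u is None:
            # Surrogate for a missing update: reuse the update of the most
            # similar client that did get through, judged here by cosine
            # similarity of last-known updates (an illustrative choice).
            j = max(received, key=lambda r: cosine_sim(last_known[i], last_known[r]))
            u = updates[j]
        surrogates.append(u)
    return np.mean(surrogates, axis=0)  # FedAvg-style average over all clients
```

Compared with plain FedAvg, which would simply drop or ignore lost updates, this sketch keeps the effective averaging weight of every client constant across rounds, which is the intuition behind the bias reduction the abstract claims.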