We present a novel coded federated learning (FL) scheme for linear regression that mitigates the effect of straggling devices while retaining the privacy level of conventional FL. The proposed scheme combines one-time padding to preserve privacy with gradient codes to yield resiliency against stragglers, and consists of two phases. In the first phase, the devices share a one-time padded version of their local data with a subset of the other devices. In the second phase, the devices and the central server collaboratively and iteratively train a global linear model using gradient codes on the one-time padded local data. To apply one-time padding to real-valued data, our scheme exploits a fixed-point arithmetic representation of the data. Unlike the coded FL scheme recently introduced by Prakash et al., the proposed scheme maintains the same level of privacy as conventional FL while achieving a similar training time. Compared to conventional FL, we show that the proposed scheme achieves training speed-up factors of $6.6$ and $9.2$ on the MNIST and Fashion-MNIST datasets for an accuracy of $95\%$ and $85\%$, respectively.
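To make the fixed-point one-time padding concrete, the following is a minimal sketch (our own illustration, not the authors' implementation) of how a device could quantize its real-valued local data to a fixed-point representation with an assumed word length $K$ and fractional precision $Q$, and mask it with a uniformly random one-time pad modulo $2^K$ before sharing; only a party holding the pad can recover the data.

```python
# Minimal sketch of one-time padding over a fixed-point representation.
# The word length K and fractional precision Q are assumed parameters,
# not values prescribed by the paper.
import numpy as np

K = 32            # assumed word length of the fixed-point representation
Q = 16            # assumed number of fractional bits
MOD = 2 ** K

def to_fixed_point(x):
    """Quantize real-valued data to integers modulo 2**K."""
    return np.round(x * (2 ** Q)).astype(np.int64) % MOD

def from_fixed_point(z):
    """Map integers modulo 2**K back to reals (two's-complement style sign recovery)."""
    z = z % MOD
    z = np.where(z >= MOD // 2, z - MOD, z)
    return z.astype(np.float64) / (2 ** Q)

rng = np.random.default_rng(0)
data = rng.standard_normal((4, 3))              # toy local data of one device

pad = rng.integers(0, MOD, size=data.shape)     # one-time pad, kept by the data owner
padded = (to_fixed_point(data) + pad) % MOD     # padded data shared with other devices

# Subtracting the pad (known only to the owner) recovers the data up to quantization error.
recovered = from_fixed_point((padded - pad) % MOD)
assert np.allclose(recovered, data, atol=2 ** -Q)
```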