We present a novel coded federated learning (FL) scheme for linear regression that mitigates the effect of straggling devices while retaining the privacy level of conventional FL. The proposed scheme combines one-time padding, to preserve privacy, with gradient codes, to yield resiliency against stragglers, and consists of two phases. In the first phase, the devices share a one-time padded version of their local data with a subset of other devices. In the second phase, the devices and the central server collaboratively and iteratively train a global linear model using gradient codes on the one-time padded local data. To apply one-time padding to real-valued data, our scheme exploits a fixed-point arithmetic representation of the data. Unlike the coded FL scheme recently introduced by Prakash \emph{et al.}, the proposed scheme maintains the same level of privacy as conventional FL while achieving a similar training time. Compared to conventional FL, we show that the proposed scheme achieves training speed-up factors of $6.6$ and $9.2$ on the MNIST and Fashion-MNIST datasets for an accuracy of $95\%$ and $85\%$, respectively.
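As a brief illustration of the padding step (the notation $f$, $q$, and $r$ below is ours, not the paper's, and this is only a sketch of the general idea), a device can quantize its local data to fixed-point integers and mask them with a uniform one-time pad over a modular ring:
\begin{align*}
\bar{x} &= \big\lfloor 2^{f} x \big\rceil \bmod q, && \text{fixed-point quantization with $f$ fractional bits,}\\
\tilde{x} &= \big(\bar{x} + r\big) \bmod q, \quad r \sim \mathrm{Unif}(\mathbb{Z}_q), && \text{one-time padding,}
\end{align*}
so that $\tilde{x}$ is uniformly distributed on $\mathbb{Z}_q$ and statistically independent of $\bar{x}$; as long as each pad $r$ is used only once, the shared padded data reveals no information about the underlying local data.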