We present two novel coded federated learning (FL) schemes for linear regression that mitigate the effect of straggling devices. The first scheme, CodedPaddedFL, mitigates the effect of straggling devices while retaining the privacy level of conventional FL. In particular, it combines one-time padding for user data privacy with gradient codes to yield resiliency against straggling devices. To apply one-time padding to real-valued data, our scheme exploits a fixed-point arithmetic representation of the data. For a scenario with 25 devices, CodedPaddedFL achieves speed-up factors of 6.6 and 9.2 for an accuracy of 95\% and 85\% on the MNIST and Fashion-MNIST datasets, respectively, compared to conventional FL. Furthermore, it yields latency similar to that of a recently proposed scheme by Prakash \emph{et al.} without the shortcoming of additional leakage of private data. The second scheme, CodedSecAgg, provides straggler resiliency and robustness against model inversion attacks and is based on Shamir's secret sharing. CodedSecAgg outperforms state-of-the-art secure aggregation schemes such as LightSecAgg by a speed-up factor of 6.6--14.6, depending on the number of colluding devices, on the MNIST dataset for a scenario with 120 devices, at the expense of a 30\% increase in latency compared to CodedPaddedFL.
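The following minimal Python sketch illustrates the general idea behind combining a fixed-point representation with one-time padding, so that a uniformly random pad can be added and later removed exactly over a finite field; the field size and number of fractional bits are hypothetical choices for illustration only, not the exact construction of CodedPaddedFL.
\begin{verbatim}
# Illustrative sketch only (not the authors' exact construction):
# one-time padding of real-valued data via a fixed-point representation.
import numpy as np

PRIME = 2**31 - 1   # assumed prime field size (hypothetical)
FRAC_BITS = 16      # assumed fractional bits of the fixed-point format

def to_fixed_point(x):
    # Quantize real values to integers and map them into the field.
    return np.round(x * 2**FRAC_BITS).astype(np.int64) % PRIME

def one_time_pad(x_fp, rng):
    # A uniformly random pad makes the padded data statistically
    # independent of x_fp (information-theoretic privacy).
    pad = rng.integers(0, PRIME, size=x_fp.shape, dtype=np.int64)
    return (x_fp + pad) % PRIME, pad

rng = np.random.default_rng(0)
x = np.array([0.25, -1.5, 3.0])
x_fp = to_fixed_point(x)
padded, pad = one_time_pad(x_fp, rng)
# Whoever holds the pad can remove it and recover the fixed-point data.
assert np.array_equal((padded - pad) % PRIME, x_fp)
\end{verbatim}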