Distributed privacy-preserving regression schemes have been developed and extended in various fields, where multiple parties collaboratively and privately run optimization algorithms, e.g., Gradient Descent, to learn a set of optimal model parameters. However, traditional Gradient-Descent-based methods fail to solve problems whose objective functions contain an L1 regularization term, such as lasso regression. In this paper, we present Federated Coordinate Descent (FCD), a new distributed scheme that addresses this issue securely in multiparty scenarios. Specifically, through secure aggregation and added perturbations, our scheme guarantees that (1) no local information is leaked to other parties, and (2) the global model parameters are not exposed to cloud servers. The added perturbations can eventually be eliminated by each party, allowing every party to derive a global model with high performance. We show that the FCD scheme fills the gap of multiparty secure Coordinate Descent methods and is applicable to general linear regression, including linear, ridge, and lasso regression. Theoretical security analysis and experimental results demonstrate that FCD runs effectively and efficiently, and achieves MAE as low as centralized methods on linear, ridge, and lasso regression tasks over real-world UCI datasets.
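To make concrete why Coordinate Descent handles the non-differentiable L1 term that plain Gradient Descent cannot, the following minimal sketch shows the standard, textbook coordinate-descent update with soft-thresholding for a centralized lasso objective. It is an illustrative example only, not the FCD protocol described in the paper: it omits the secure aggregation and perturbation steps, and all function and variable names are hypothetical.

```python
# Illustrative sketch: plain (centralized, non-private) coordinate descent for
# lasso. Each coordinate has a closed-form minimizer via soft-thresholding,
# which is why the method copes with the non-smooth L1 penalty.
# NOT the FCD protocol: no secure aggregation or added perturbations here.
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the closed-form single-coordinate minimizer."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by cycling over coordinates."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        for j in range(d):
            # Partial residual that excludes coordinate j's contribution.
            r_j = y - X @ w + X[:, j] * w[j]
            rho = (X[:, j] @ r_j) / n
            z = (X[:, j] @ X[:, j]) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    true_w = np.zeros(10)
    true_w[:3] = [2.0, -1.0, 0.5]
    y = X @ true_w + 0.01 * rng.normal(size=200)
    print(lasso_coordinate_descent(X, y, lam=0.1))  # sparse estimate close to true_w
```

In the federated setting described in the abstract, each per-coordinate statistic would be computed locally and combined via secure aggregation with perturbations, rather than from a single pooled dataset as in this sketch.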