State-of-the-art federated learning opens a new direction for data privacy protection in mobile crowdsensing machine learning applications. However, besides being vulnerable to GAN-based user data reconstruction attacks, existing gradient-descent-based federated learning schemes lack consideration of how to preserve model privacy. In this paper, we propose a secret-sharing-based federated extreme gradient boosting learning framework (FedXGB) to achieve privacy-preserving model training for mobile crowdsensing. First, a series of protocols is designed to implement privacy-preserving extreme gradient boosting of classification and regression trees. The protocols preserve the user data privacy protection feature of federated learning: XGBoost is trained without revealing plaintext user data. Then, in consideration of the high commercial value of a well-trained model, a secure prediction protocol is developed to protect model privacy for the crowdsensing sponsor. Additionally, we conduct comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness, and efficiency of FedXGB. The results show that FedXGB is secure in the honest-but-curious model, and attains accuracy and convergence rate close to those of the original model, with low runtime.
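To make the secret-sharing foundation of the framework concrete, the following is a minimal illustrative sketch (not the paper's actual protocol) of additive secret sharing, the primitive that lets an aggregator sum per-user values, such as gradient statistics, without ever seeing any individual user's plaintext. The modulus `P` and the function names are assumptions chosen for illustration only.

```python
import random

# Illustrative modulus; a real deployment would fix a field agreed by all parties.
P = 2**61 - 1

def share(secret, n_parties):
    """Split an integer secret into n additive shares modulo P.
    Any n-1 shares reveal nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Shares are additively homomorphic: parties can add their shares of two
# secrets locally, and the reconstruction yields the sum of the secrets.
# This is what allows gradient aggregation over shares rather than plaintexts.
a_shares = share(10, 3)
b_shares = share(20, 3)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 30
```

In a federated boosting setting, each user would submit shares of its local gradient and Hessian sums, and the aggregate needed for split finding is reconstructed only in combined form.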