In order to perform machine learning among multiple parties while protecting the privacy of raw data, privacy-preserving machine learning based on secure multi-party computation (MPL for short) has been a hot research topic in recent years. The configuration of MPL usually follows a peer-to-peer architecture, where each party has the same chance to reveal the output result. However, typical business scenarios often follow a hierarchical architecture, where a powerful, usually privileged, party leads the machine learning tasks. Only the privileged party can reveal the final model, even if the other assistant parties collude with each other. It is further required that the training does not abort, so as to meet scheduled deadlines and/or avoid wasting computing resources, even when some assistant parties drop out. Motivated by the above scenarios, we propose pMPL, a robust MPL framework with a privileged party. pMPL supports three-party training in the semi-honest setting. By setting alternate shares for the privileged party, pMPL is robust enough to tolerate one of the other two parties dropping out during training. With the above settings, we design a series of efficient protocols based on vector space secret sharing for pMPL to bridge the gap between vector space secret sharing and machine learning. Finally, the experimental results show that the performance of pMPL is promising when compared with state-of-the-art MPL frameworks. In particular, in the LAN setting, pMPL is around $16\times$ and $5\times$ faster than TF-Encrypted (with ABY3 as the back-end framework) for linear regression and logistic regression, respectively. Besides, the accuracy of the trained models for linear regression, logistic regression, and BP neural networks reaches around 97%, 99%, and 96% on the MNIST dataset, respectively.
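To make the "alternate shares" idea concrete, the following is a minimal sketch of vector space secret sharing (Brickell-style) with a privileged party. All parameters here are illustrative assumptions, not pMPL's actual vectors or field: each party's share is the dot product of a random vector $\vec{a}$ (whose first coordinate is the secret) with a public vector; a set of parties can reconstruct exactly when the target vector $\vec{t} = (1,0,0)$ lies in the span of their public vectors. The privileged party P0 holds an alternate share (vector $v_3$) so that reconstruction still works if either assistant drops out, while the two assistants alone (or P0 alone) learn nothing.

```python
# Toy sketch of vector space secret sharing with a privileged party,
# in the spirit of pMPL. The field and the public vectors below are
# illustrative assumptions chosen for this example, not the paper's
# actual parameters.
import random

P = 2**61 - 1  # a Mersenne prime; arithmetic is over the field F_P

def inv(x):
    # Modular inverse via Fermat's little theorem (P is prime).
    return pow(x, P - 2, P)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

# Public vectors: the target t, the privileged party P0's main vector v0
# and alternate vector v3, and the assistants' vectors v1 (P1), v2 (P2).
# Chosen so that t is in span{v0,v1,v2}, span{v0,v3,v1}, span{v0,v3,v2},
# but NOT in span{v1,v2} (colluding assistants) or span{v0,v3} (P0 alone).
v0 = (1, 1, 1)
v3 = (0, 1, 2)   # P0's alternate share vector
v1 = (0, 1, 0)
v2 = (0, 1, 3)

def share(secret):
    # Random vector a with a[0] = secret; each share is dot(a, v_i).
    a = (secret, random.randrange(P), random.randrange(P))
    return {name: dot(a, v)
            for name, v in [("s0", v0), ("s3", v3), ("s1", v1), ("s2", v2)]}

# Reconstruction coefficients, i.e. solutions of sum(c_i * v_i) = t:
#   all three parties: t = v0 - (2/3) v1 - (1/3) v2
#   P2 dropped out:    t = v0 - (1/2) v3 - (1/2) v1
#   P1 dropped out:    t = v0 - 2 v3 + v2
def reveal_full(sh):
    return (sh["s0"] - 2 * inv(3) * sh["s1"] - inv(3) * sh["s2"]) % P

def reveal_without_p2(sh):
    return (sh["s0"] - inv(2) * sh["s3"] - inv(2) * sh["s1"]) % P

def reveal_without_p1(sh):
    return (sh["s0"] - 2 * sh["s3"] + sh["s2"]) % P

secret = 123456789
sh = share(secret)
assert reveal_full(sh) == secret
assert reveal_without_p2(sh) == secret
assert reveal_without_p1(sh) == secret
```

Because each reconstruction is a public linear combination of shares, it equals $\vec{a} \cdot (\sum_i c_i \vec{v}_i) = \vec{a} \cdot \vec{t} = $ the secret, which is why either dropout case can be handled without rerunning the sharing phase; pMPL's actual training protocols build linear operations on top of this property.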