With increasing demands for privacy protection, privacy-preserving machine learning has drawn much attention in both academia and industry. However, most existing methods have limitations in practical applications. On the one hand, although most cryptographic methods are provably secure, they incur heavy computation and communication overhead. On the other hand, the security of many relatively efficient private methods (e.g., federated learning and split learning) is being questioned, since they are not provably secure. Inspired by previous work on privacy-preserving machine learning, we build a privacy-preserving machine learning framework that combines random permutation and arithmetic secret sharing via our compute-after-permutation technique. Since our method reduces the cost of element-wise function computation, it is more efficient than existing cryptographic methods. Moreover, by adopting distance correlation as a metric for privacy leakage, we demonstrate that our method is more secure than previous methods that lack provable security. Overall, our proposal achieves a good balance between security and efficiency. Experimental results show that our method not only is up to 6x faster and reduces network traffic by up to 85% compared with state-of-the-art cryptographic methods, but also leaks less privacy during training than methods without provable security.
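To make the combination of arithmetic secret sharing and random permutation concrete, here is a minimal two-party sketch of the compute-after-permutation idea for an element-wise function: one party samples a secret permutation, the permuted secret-shared vector is opened to the other party, which applies the function in the clear and re-shares the results, and the permutation is finally inverted. All names, the modulus, and the protocol flow below are illustrative assumptions for exposition, not the paper's actual protocol.

```python
import secrets

P = 2**61 - 1  # illustrative large prime modulus for additive secret sharing


def share(x):
    """Split x into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P


def reconstruct(s0, s1):
    """Recombine two additive shares."""
    return (s0 + s1) % P


def compute_after_permutation(xs, f):
    """Illustrative sketch: evaluate f element-wise on a secret-shared vector.

    Party 0 samples a random permutation pi; both parties permute their
    share vectors; the permuted values are opened to party 1, which sees
    only the multiset of values, not their positions; party 1 applies f in
    the clear and re-shares; party 0 then inverts pi on the result shares.
    """
    shares = [share(x) for x in xs]
    s0 = [a for a, _ in shares]  # party 0's shares
    s1 = [b for _, b in shares]  # party 1's shares

    # party 0 picks a secret random permutation pi (Fisher-Yates shuffle)
    pi = list(range(len(xs)))
    for i in range(len(pi) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        pi[i], pi[j] = pi[j], pi[i]

    # both parties permute their shares; permuted values are opened to party 1
    opened = [reconstruct(s0[pi[i]], s1[pi[i]]) for i in range(len(xs))]

    # party 1 evaluates f element-wise in the clear and re-shares the results
    f_shares = [share(f(v)) for v in opened]

    # party 0 applies the inverse permutation to restore the original order
    out0 = [0] * len(xs)
    out1 = [0] * len(xs)
    for i, p in enumerate(pi):
        out0[p] = f_shares[i][0]
        out1[p] = f_shares[i][1]
    return out0, out1


xs = [3, 1, 4, 1, 5]
o0, o1 = compute_after_permutation(xs, lambda v: v * v)
print([reconstruct(a, b) for a, b in zip(o0, o1)])  # element-wise squares
```

The cost saving claimed in the abstract comes from evaluating the non-linear function on opened plaintexts instead of inside a cryptographic protocol; the permutation limits what the evaluating party learns to the unordered multiset of values.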