Privacy-preserving machine learning (PPML) aims to enable machine learning (ML) algorithms to be used on sensitive data. We contribute to this line of research by proposing a framework that allows efficient and secure evaluation of full-fledged, state-of-the-art ML algorithms via secure multi-party computation (MPC). This contrasts with most prior work, which substitutes ML algorithms with approximated "MPC-friendly" variants. A drawback of the latter approach is that it requires fine-tuning of the combined ML and MPC algorithms, which may yield less efficient algorithms or inferior-quality ML. This is an issue for secure deep neural network (DNN) training in particular, as it involves arithmetic operations thought to be "MPC-unfriendly": integer division, exponentiation, inversion, and square root. In this work, we propose secure and efficient protocols for these seemingly MPC-unfriendly computations. Our protocols are three-party protocols in the honest-majority setting, and we propose both passively secure and actively secure (with abort) variants. A notable feature of our protocols is that they simultaneously provide high accuracy and efficiency. This framework enables us to efficiently and securely compute modern ML algorithms such as Adam and the softmax function "as is", without resorting to approximations. As a result, we obtain secure DNN training that outperforms state-of-the-art three-party systems: our full training is up to 6.7 times faster than just the online phase of the recently proposed FALCON (PETS'21) on a standard benchmark network. We further perform measurements on real-world DNNs, AlexNet and VGG16. Compared to FALCON, our framework is up to about 12-14 times faster for AlexNet and 46-48 times faster for VGG16 to reach accuracies of 70% and 75%, respectively.
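To illustrate why softmax is considered MPC-unfriendly, note that its plaintext definition requires exactly the operations listed above: exponentiation and division. A minimal plaintext sketch of the function the framework evaluates "as is" (this is only the cleartext computation, not the secure protocol itself):

```python
import math

def softmax(z):
    """Plaintext softmax over a list of logits.

    Subtracting the maximum before exponentiating is the standard
    numerical-stability trick; the result is mathematically identical
    because the shift cancels in the ratio.
    """
    m = max(z)                              # shift for numerical stability
    exps = [math.exp(x - m) for x in z]     # exponentiation: "MPC-unfriendly"
    total = sum(exps)
    return [e / total for e in exps]        # division: "MPC-unfriendly"
```

MPC-friendly substitutes typically replace the exponentials with low-degree polynomials or piecewise-linear functions; the claim here is that the exact formula above can instead be evaluated securely and efficiently.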