The proliferation of deep learning (DL) has raised privacy and security concerns. To address these issues, secure two-party computation (2PC) has been proposed as a means of enabling privacy-preserving DL computation. In practice, however, 2PC methods often incur high computation and communication overhead, which can impede their use in large-scale systems. To address this challenge, we introduce RRNet, a systematic framework that jointly reduces the overhead of MPC comparison protocols and accelerates computation in hardware. Our approach integrates the hardware latency of cryptographic building blocks into the DNN loss function, resulting in improved energy efficiency, accuracy, and security guarantees. Furthermore, we propose a cryptographic hardware scheduler and a corresponding performance model for Field Programmable Gate Arrays (FPGAs) to further enhance the efficiency of our framework. Experiments show that RRNet achieves substantially higher ReLU reduction than all state-of-the-art (SOTA) works on the CIFAR-10 dataset.
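To make the hardware-aware training objective concrete, the following is a minimal sketch of how a 2PC latency estimate could be folded into a DNN loss. All names (`GatedActivation`, `LAT_RELU_2PC`, `hardware_aware_loss`) and the per-operator latency constants are hypothetical illustrations under assumed numbers, not RRNet's actual implementation: the idea shown is that each activation carries a trainable gate choosing between a comparison-heavy ReLU and a comparison-free polynomial, and the expected 2PC latency of that choice enters the loss as a differentiable penalty.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-element 2PC latency estimates (e.g., profiled on an FPGA
# prototype); the units and values here are illustrative only.
LAT_RELU_2PC = 1.0   # cost of one comparison-protocol-based ReLU
LAT_POLY_2PC = 0.1   # cost of one comparison-free polynomial activation

class GatedActivation(nn.Module):
    """Soft mixture of ReLU and a cheap square activation, selected by a
    trainable gate, so the activation choice is learned during training."""
    def __init__(self):
        super().__init__()
        self.gate_logit = nn.Parameter(torch.zeros(1))  # sigmoid -> [0, 1]

    def forward(self, x):
        g = torch.sigmoid(self.gate_logit)
        return g * F.relu(x) + (1 - g) * x.pow(2)

    def latency(self, num_elems):
        # Expected 2PC latency of this layer under the current gate value.
        g = torch.sigmoid(self.gate_logit)
        return num_elems * (g * LAT_RELU_2PC + (1 - g) * LAT_POLY_2PC)

def hardware_aware_loss(logits, labels, acts_and_sizes, lam=1e-4):
    """Task loss plus a differentiable estimate of total 2PC latency."""
    task = F.cross_entropy(logits, labels)
    lat = sum(act.latency(n) for act, n in acts_and_sizes)
    return task + lam * lat

if __name__ == "__main__":
    # Toy usage: one gated activation feeding a linear classifier.
    act = GatedActivation()
    x = torch.randn(8, 16)
    logits = nn.Linear(16, 10)(act(x))
    y = torch.randint(0, 10, (8,))
    loss = hardware_aware_loss(logits, y, [(act, x.numel())])
    loss.backward()  # gradients flow into gate_logit as well
```

In a scheme like this, the gates tend to saturate toward 0 or 1 as training proceeds, so each activation can afterwards be frozen to either a ReLU (paying the comparison-protocol cost) or the cheap polynomial, and the resulting operator mix can then be mapped onto the FPGA by a scheduler such as the one the framework proposes.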