We propose AriaNN, a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data. Our semi-honest 2-party computation protocol leverages function secret sharing, a recent lightweight cryptographic primitive that allows us to achieve an efficient online phase. We design optimized primitives for the building blocks of neural networks such as ReLU, MaxPool, and BatchNorm. For instance, we perform private comparison for ReLU operations with a single message of the size of the input during the online phase, and with preprocessing keys close to 4× smaller than previous work. Finally, we propose an extension to support n-party private federated learning. We implement our framework as an extensible system on top of PyTorch that leverages CPU and GPU hardware acceleration for cryptographic and machine learning operations. We evaluate our end-to-end system for private inference and training between distant servers on standard neural networks such as AlexNet, VGG16, and ResNet18. We show that computation rather than communication is the main bottleneck, and that using GPUs together with reduced key sizes is a promising solution to overcome this barrier.
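To make the single-message online pattern concrete, below is a minimal Python/PyTorch sketch of the 2-party additive secret sharing substrate that FSS-based protocols like AriaNN build on. It is illustrative only, not the authors' implementation: the `share`/`reconstruct` helpers, the dealer's mask `r`, and the key names `k0`, `k1` are assumptions for exposition, and the actual distributed comparison function (DCF) key generation and evaluation, the heart of the protocol, is elided.

```python
import torch

RING = 2 ** 32  # shares live in Z_{2^32}; int64 tensors hold them without overflow

def share(x):
    """Split an integer tensor into two additive shares with x0 + x1 = x (mod RING)."""
    x0 = torch.randint(0, RING, x.shape, dtype=torch.int64)
    x1 = (x - x0) % RING
    return x0, x1

def reconstruct(x0, x1):
    """Recombine two additive shares into the plaintext tensor."""
    return (x0 + x1) % RING

# Offline phase: a trusted dealer samples a random mask r and would also emit
# FSS keys (k0, k1) for f_r(u) = 1{u - r < 0}; the DCF keygen is omitted here.
x = torch.tensor([5, -3, 7, -1], dtype=torch.int64) % RING  # private input
r = torch.randint(0, RING, x.shape, dtype=torch.int64)      # dealer's mask
x0, x1 = share(x)
r0, r1 = share(r)

# Online phase: each party i sends the single message x_i + r_i, so the masked
# value delta = x + r becomes public while x itself stays hidden.
delta = reconstruct((x0 + r0) % RING, (x1 + r1) % RING)

# Each party would now locally compute Eval(k_i, delta) to obtain an additive
# share of the comparison bit 1{x < 0}, with no further interaction.
assert torch.equal(reconstruct(x0, x1), x)
print("public masked value:", delta)
```

The design choice this illustrates is that all interaction for a comparison is compressed into one exchange of input-sized messages; everything else is local evaluation of preprocessed keys, which is why key size and local computation, rather than communication rounds, dominate the cost.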