Private Inference (PI) uses cryptographic primitives to perform privacy-preserving machine learning. In this setting, the owner of a network runs inference on a client's data without learning anything about the data and without revealing any information about the model. It has been observed that a major computational bottleneck of PI is the evaluation of the non-linear gate (i.e., the ReLU), so considerable effort has been devoted to reducing the number of ReLUs in a given network. We focus on the DReLU, the non-linear step function underlying the ReLU, and show that one DReLU can serve many ReLU operations. We propose a new activation module in which the DReLU is computed only on a subset of the channels (prototype channels), while each remaining channel (a replicate channel) copies the DReLU of each of its neurons from the corresponding neurons in one of the prototype channels. We then extend this idea to work across different layers. We show that this formulation can drastically reduce the number of DReLU operations in ResNet-type networks. Furthermore, our theoretical analysis shows that the new formulation can solve an extended version of the XOR problem using just one non-linearity and two neurons, something that traditional formulations and some PI-specific methods cannot achieve. We achieve new SOTA results on several classification setups, and achieve SOTA results on image segmentation.
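The channel-sharing idea described above can be sketched in a few lines: the DReLU (a binary step function) is evaluated only on prototype channels, and every replicate channel multiplies its own pre-activations by the mask of its assigned prototype. This is a minimal NumPy illustration under our own naming; the function and variable names are not taken from the paper, and the actual PI protocol would evaluate the step function under cryptography.

```python
import numpy as np

def shared_drelu_activation(x, prototype_idx, assignment):
    """Apply a ReLU-like activation in which the DReLU (step) mask is
    computed only on prototype channels and reused by replicate channels.

    x:             pre-activations, shape (channels, height, width)
    prototype_idx: channels on which the DReLU is actually evaluated
    assignment:    maps every channel to the prototype whose mask it copies
                   (a prototype channel maps to itself)
    """
    # DReLU: binary step function, evaluated only on prototype channels.
    masks = {p: (x[p] > 0).astype(x.dtype) for p in prototype_idx}
    # Each channel gates its own values with its prototype's mask.
    return np.stack([x[c] * masks[assignment[c]] for c in range(x.shape[0])])
```

Note that for a prototype channel the result coincides with an ordinary ReLU, while a replicate channel may keep negative values (or zero out positive ones) wherever its mask disagrees with its own sign. This added expressiveness is what the XOR argument in the abstract exploits.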


