Hybrid private inference (PI) protocols, which synergistically utilize both multi-party computation (MPC) and homomorphic encryption, are among the most prominent techniques for PI. However, even the state-of-the-art PI protocols are bottlenecked by the non-linear layers, especially the activation functions. Although a standard non-linear activation function yields higher model accuracy, it must be processed via a costly garbled-circuit MPC primitive. A polynomial activation can instead be processed via the cheaper Beaver's multiplication-triple MPC primitive, but prior polynomial activations have incurred severe accuracy drops. In this paper, we propose an accuracy-preserving low-degree polynomial activation function (AESPA) that exploits the Hermite expansion of ReLU and basis-wise normalization. We apply AESPA to popular ML models, such as VGGNet, ResNet, and pre-activation ResNet, and show inference accuracy comparable to that of the standard models with ReLU activation, achieving superior accuracy over prior low-degree polynomial approaches. When applied to the all-ReLU baseline on the state-of-the-art Delphi PI protocol, AESPA achieves up to 42.1x lower online latency and 28.3x lower communication cost.
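To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how a low-degree polynomial approximation of ReLU can be derived from its Hermite expansion. It computes the coefficients c_n = E[ReLU(Z) h_n(Z)] for Z ~ N(0,1), where h_n = He_n / sqrt(n!) are the normalized probabilists' Hermite polynomials; the coefficients are obtained exactly via Gaussian half-moments. The function names `hermite_relu_coeffs` and `poly_act` are illustrative, not from the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import herme2poly, hermeval
from math import factorial, gamma, sqrt, pi

def half_moment(k):
    """E[Z^k * 1{Z > 0}] for Z ~ N(0,1), in closed form."""
    return 2 ** ((k - 1) / 2) * gamma((k + 1) / 2) / sqrt(2 * pi)

def hermite_relu_coeffs(degree):
    """Hermite coefficients c_n = E[ReLU(Z) h_n(Z)], with
    h_n = He_n / sqrt(n!) the normalized probabilists' Hermite basis."""
    coeffs = []
    for n in range(degree + 1):
        e_n = np.zeros(n + 1)
        e_n[n] = 1.0
        powers = herme2poly(e_n)  # He_n expressed in the monomial basis
        # ReLU(z) * z^k integrates to the (k+1)-th Gaussian half-moment
        c = sum(a * half_moment(k + 1) for k, a in enumerate(powers))
        coeffs.append(c / sqrt(factorial(n)))
    return coeffs

def poly_act(x, coeffs):
    """Evaluate sum_n c_n h_n(x): a low-degree polynomial stand-in for ReLU."""
    cvec = [c / sqrt(factorial(n)) for n, c in enumerate(coeffs)]
    return hermeval(x, cvec)
```

With `degree=2` this gives c_0 = 1/sqrt(2*pi), c_1 = 1/2, and c_2 = 1/(2*sqrt(pi)), i.e. a degree-2 polynomial that is MPC-friendly because it needs only multiplications. In the paper's full scheme, such an expansion is combined with basis-wise normalization to preserve accuracy in deep networks; that part is omitted here.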