It is increasingly important to enable privacy-preserving inference for cloud services built on Transformers. Post-quantum cryptographic techniques, e.g., fully homomorphic encryption (FHE) and multi-party computation (MPC), are popular approaches to supporting private Transformer inference. However, existing works still suffer from prohibitive computational and communication overhead. In this work, we present Primer, which enables fast and accurate Transformer inference over encrypted data for natural language processing tasks. In particular, Primer is built on a hybrid cryptographic protocol optimized for attention-based Transformer models, together with techniques including computation merging and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces inference latency by 90.6% to 97.5% compared with previous methods.
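The abstract names tokens-first ciphertext packing without describing its layout; the sketch below is a minimal illustration of one plausible interpretation, assuming a SIMD-slot ciphertext model (as in CKKS/BFV-style schemes). The function name `pack_tokens_first`, the slot count, and the matrix shapes are hypothetical choices for exposition, not the paper's actual protocol.

```python
import numpy as np

# Hypothetical sketch of tokens-first packing: an FHE ciphertext holds a
# fixed number of SIMD slots, and grouping the same hidden dimension of
# ALL tokens contiguously lets one homomorphic op touch every token at once.

NUM_SLOTS = 16  # slots per ciphertext (assumed value for illustration)

def pack_tokens_first(mat: np.ndarray, num_slots: int) -> list:
    """Flatten a [num_tokens, hidden_dim] matrix in column-major order so
    tokens vary fastest, then split into ciphertext-sized slot vectors."""
    flat = mat.flatten(order="F")  # [dim0 of all tokens, dim1 of all tokens, ...]
    n_cts = int(np.ceil(flat.size / num_slots))
    padded = np.zeros(n_cts * num_slots)
    padded[:flat.size] = flat  # zero-pad the last slot vector
    return [padded[i:i + num_slots] for i in range(0, padded.size, num_slots)]

# Example: 4 tokens with hidden size 8 pack into 2 slot vectors; adding a
# per-dimension bias then touches all 4 tokens in a single element-wise op.
x = np.arange(32, dtype=np.float64).reshape(4, 8)
cts = pack_tokens_first(x, NUM_SLOTS)
print(len(cts), cts[0])
```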