Homomorphic encryption (HE) enables computation on encrypted data, which makes privacy-preserving neural network inference possible. A drawback of this technique is that it is several orders of magnitude slower than computation on unencrypted data. Neural networks are commonly trained with floating-point arithmetic, while most homomorphic encryption libraries operate on integers, so the neural network must be quantised. A straightforward approach would be to quantise to large integer sizes (e.g. 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks using quantisation-aware training, allowing more efficient computation. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without significant loss of accuracy, while for the CIFAR architecture we can reduce them by 43%. Implementing the resulting networks under the BFV homomorphic encryption scheme using SEAL, we reduce the execution time of the MNIST neural network by 80% and of the CIFAR neural network by 40%.
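To make the quantisation step concrete, the following is a minimal sketch of uniform symmetric quantisation of floating-point weights to a given integer bit width. It is an illustrative assumption, not the paper's quantisation-aware training procedure (which folds quantisation into training); the function name and API are hypothetical.

```python
import numpy as np

def quantise(weights, bits):
    """Uniformly quantise float weights to signed integers of the given
    bit width. Illustrative sketch only; quantisation-aware training,
    as used in the paper, simulates this rounding during training."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax  # map largest weight to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int64)
    return q, scale                          # dequantise via q * scale

w = np.array([0.5, -1.2, 0.03, 0.9])
q8, s8 = quantise(w, 8)   # fine grid, small quantisation error
q4, s4 = quantise(w, 4)   # fewer bits, coarser grid, larger error
```

Smaller bit widths shrink the plaintext values that the HE scheme must accommodate, which is what enables the more efficient BFV parameter choices reported above.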