The widespread adoption of Machine Learning as a Service raises critical privacy and security concerns, particularly about data confidentiality and trust in both cloud providers and the machine learning models. Homomorphic Encryption (HE) has emerged as a promising solution to these problems, allowing computations on encrypted data without decryption. Despite its potential, existing approaches to integrating HE into neural networks are often limited to specific architectures, leaving a wide gap: there is no framework for easily developing HE-friendly privacy-preserving neural network models comparable to those available in the broader field of machine learning. In this paper, we present FHEON, a configurable framework for developing privacy-preserving convolutional neural network (CNN) models for inference using HE. FHEON introduces optimized and configurable implementations of privacy-preserving CNN layers, including convolutional layers, average pooling layers, ReLU activation functions, and fully connected layers. These layers are configured using parameters such as input channels, output channels, kernel size, stride, and padding to support arbitrary CNN architectures. We assess the performance of FHEON on several CNN architectures, including LeNet-5, VGG-11, VGG-16, ResNet-20, and ResNet-34. FHEON maintains encrypted-domain accuracies within ±1% of their plaintext counterparts for the ResNet-20 and LeNet-5 models. Notably, on a consumer-grade CPU, models built on FHEON achieve 98.5% accuracy with a latency of 13 seconds on MNIST using LeNet-5, and 92.2% accuracy with a latency of 403 seconds on CIFAR-10 using ResNet-20. Additionally, FHEON operates within a practical memory budget, requiring no more than 42.3 GB for VGG-16.
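To illustrate how layers parameterized by input channels, output channels, kernel size, stride, and padding might be composed into a complete network, the sketch below assembles a LeNet-5-style model and runs encrypted inference. It is a minimal, hypothetical sketch: the module layout (fheon.layers), class names (Conv2d, AvgPool2d, ReLU, Linear, Sequential), and the Context/encrypt/decrypt helpers are assumptions modeled on common deep-learning-framework conventions, not FHEON's actual API.

```python
# Hypothetical sketch only; FHEON's real API is not shown in the abstract.
from fheon import layers, context  # assumed module layout, not the actual package

# LeNet-5-style model assembled from configurable privacy-preserving layers,
# each parameterized by channels, kernel size, stride, and padding.
model = layers.Sequential(
    layers.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2),
    layers.ReLU(),                        # ReLU evaluated homomorphically
    layers.AvgPool2d(kernel_size=2, stride=2),
    layers.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=0),
    layers.ReLU(),
    layers.AvgPool2d(kernel_size=2, stride=2),
    layers.Flatten(),
    layers.Linear(in_features=400, out_features=120),
    layers.ReLU(),
    layers.Linear(in_features=120, out_features=84),
    layers.ReLU(),
    layers.Linear(in_features=84, out_features=10),
)

# Encrypted inference on an MNIST image: the client encrypts the input,
# the server evaluates the network on ciphertexts, and only the key holder
# can decrypt the resulting logits.
ctx = context.Context()                   # assumed helper bundling HE parameters and keys
enc_x = ctx.encrypt(image)                # `image` is a 28x28 plaintext array
enc_logits = model(enc_x)                 # all computation stays in the encrypted domain
logits = ctx.decrypt(enc_logits)
```

In such a design, each layer carries only its shape parameters, so the same layer implementations can be reused to express other architectures (e.g., VGG or ResNet variants) without changing the underlying HE machinery.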