Outsourcing neural network inference tasks to an untrusted cloud raises data privacy and integrity concerns. To address these challenges, several privacy-preserving and verifiable inference techniques have been proposed that replace non-polynomial activation functions such as the rectified linear unit (ReLU) with polynomial activation functions. Such techniques usually require polynomials with integer coefficients or polynomials over finite fields. Motivated by these requirements, several works have proposed replacing the ReLU activation function with the square activation function. In this work, we empirically show that the square function is not the best degree-$2$ polynomial replacement for ReLU, even when the polynomials are restricted to integer coefficients. We instead propose a degree-$2$ polynomial activation function with a first-order term and empirically show that it leads to much better models. Our experiments on the CIFAR-$10$ and CIFAR-$100$ datasets across various architectures show that the proposed activation function improves test accuracy by up to $9.4\%$ compared to the square function.
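Concretely, whereas prior work replaces $\mathrm{ReLU}(x) = \max(0, x)$ with the square activation $x \mapsto x^2$, the activation considered here takes a general degree-$2$ form with a nonzero first-order term, sketched as
$$f(x) = a x^2 + b x, \qquad a, b \in \mathbb{Z},\ b \neq 0,$$
where the particular integer coefficients (and any constant term) are left unspecified in this sketch; the defining difference from the square function is the presence of the first-order term.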