In this paper, we propose a novel design, called MixNN, for protecting the structure and parameters of deep learning models. In MixNN, the layers of a deep learning model are fully decentralized. Using ideas from mix networks, MixNN hides communication addresses, layer parameters and operations, and the forward and backward message flows among non-adjacent layers. MixNN has the following advantages: 1) an adversary cannot fully control all layers of a model, including its structure and parameters; 2) even if some layers collude, they cannot tamper with other honest layers; 3) model privacy is preserved during the training phase. We provide a detailed description of the deployment. In a classification experiment on AWS EC2, we compared a neural network deployed on a single virtual machine with the same network deployed using the MixNN design. The results show that MixNN incurs a classification accuracy difference of less than 0.001, while its total running time is about 7.5 times longer than that of the single-virtual-machine deployment.
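The following is a minimal, self-contained sketch of the layer-per-node idea described above, not the authors' implementation: the class and method names (LayerNode, forward, backward), the use of plain NumPy, and the single fully connected layer per node are illustrative assumptions. It only shows how each node can hold its own parameters and the address of its immediate neighbor, so that activations flow forward and gradients flow backward without any single party seeing the full model.

```python
import numpy as np

class LayerNode:
    """One fully connected ReLU layer hosted by an independent party (illustrative)."""

    def __init__(self, in_dim, out_dim, lr=0.1):
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)
        self.lr = lr
        self.next_node = None  # this node only knows the address of the adjacent layer

    def forward(self, x):
        # Cache the input locally; only the activation leaves this node.
        self.x = x
        out = np.maximum(x @ self.W + self.b, 0.0)
        return self.next_node.forward(out) if self.next_node else out

    def backward(self, grad_out):
        # ReLU derivative on this node's own pre-activation.
        grad_out = grad_out * ((self.x @ self.W + self.b) > 0)
        grad_W = self.x.T @ grad_out
        grad_x = grad_out @ self.W.T
        # Update local parameters; only grad_x is sent back to the previous node.
        self.W -= self.lr * grad_W
        self.b -= self.lr * grad_out.sum(axis=0)
        return grad_x

# Chain two nodes: each party knows only its neighbor, not the whole model.
first, second = LayerNode(4, 8), LayerNode(8, 2)
first.next_node = second
y = first.forward(np.random.randn(3, 4))      # forward message flow
grad = second.backward(np.ones_like(y))       # backward message flow, last layer first
_ = first.backward(grad)
```

In the actual MixNN design, the nodes would additionally hide their communication addresses and message contents from non-adjacent layers using mix-network techniques, which this sketch omits.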