Extracting the layer architecture of a given deep neural network (DNN) through hardware-based side channels allows adversaries to steal its intellectual property and even launch powerful adversarial attacks on the target system. In this work, we propose DNN-Alias, an obfuscation method for DNNs that forces all the layers in a given network to have similar execution traces, preventing attack models from differentiating between the layers. To this end, DNN-Alias applies a set of layer-obfuscation operations, such as layer branching and layer deepening, to alter the run-time traces while preserving the functionality. DNN-Alias employs an evolutionary algorithm to find the combination of obfuscation operations that maximizes the security level while staying within a user-provided latency overhead budget. We demonstrate the effectiveness of DNN-Alias by obfuscating the architectures of 700 randomly generated DNNs running on multiple NVIDIA RTX 2080 Ti GPU-based machines. Our experiments show that state-of-the-art side-channel architecture-stealing attacks cannot extract the original DNN accurately. Moreover, we obfuscate the architectures of various DNNs, such as the VGG-11, VGG-13, ResNet-20, and ResNet-32 networks. Training these networks on the standard CIFAR-10 dataset, we show that DNN-Alias maintains the functionality of the original DNNs by preserving their inference accuracy. Further, the experiments highlight that adversarial attacks on the obfuscated DNNs are unsuccessful.
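To illustrate the kind of function-preserving transformation the abstract refers to, the sketch below shows one possible "layer deepening" operation in PyTorch: an identity-initialized convolution is appended so that an extra layer appears in the run-time trace while the network's output is unchanged. This is a minimal illustration under our own assumptions, not the paper's actual implementation; the helper name `deepen_with_identity_conv` is hypothetical.

```python
import torch
import torch.nn as nn

def deepen_with_identity_conv(channels: int) -> nn.Conv2d:
    """Build a 3x3 conv initialized to the identity mapping (hypothetical helper).

    Appending this layer adds one more layer to the execution trace without
    changing the network's output, sketching the general idea of a
    function-preserving "layer deepening" obfuscation step.
    """
    conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
    with torch.no_grad():
        conv.weight.zero_()
        for c in range(channels):
            conv.weight[c, c, 1, 1] = 1.0  # pass each channel through unchanged
    return conv

# Usage: append the identity conv to a block of an existing model and verify
# that the functionality is preserved.
block = nn.Sequential(nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU())
obfuscated = nn.Sequential(block, deepen_with_identity_conv(16))

x = torch.randn(1, 16, 32, 32)
assert torch.allclose(block(x), obfuscated(x), atol=1e-6)
```

In a full system, operations of this kind would be combined and selected by a search procedure (the paper uses an evolutionary algorithm) subject to the latency overhead budget.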