Unlike traditional static deep neural networks (DNNs), dynamic neural networks (NNs) adapt their structures or parameters to different inputs to balance accuracy and computational efficiency, and they have recently become an emerging research area in deep learning. Although traditional static DNNs are known to be vulnerable to the membership inference attack (MIA), which aims to infer whether a particular data point was used to train the model, little is known about how such an attack performs on dynamic NNs. In this paper, we propose a novel MIA against dynamic NNs that leverages their unique policy network mechanism to increase the effectiveness of membership inference. We conducted extensive experiments using two dynamic NNs, i.e., GaterNet and BlockDrop, on four mainstream image classification datasets, i.e., CIFAR-10, CIFAR-100, STL-10, and GTSRB. The evaluation results demonstrate that control-flow information can significantly boost the MIA. Based on backbone finetuning and information fusion, our method achieves better results than the baseline attack and the traditional attack that uses intermediate information.
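To make the idea of fusing control-flow information into membership inference concrete, below is a minimal illustrative sketch, not the paper's exact implementation. It assumes the attack features are simply the concatenation of the target model's class posteriors with the binary gate/block decisions emitted by the dynamic network's policy module, and it uses randomly generated placeholder data and a logistic-regression attack model for demonstration.

```python
# Sketch: fusing control-flow (gate) information with output confidences
# as membership-inference features. Shapes and data are illustrative
# assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def attack_features(posteriors, gates):
    """Concatenate class posteriors with the binary gate/block decisions
    produced by the dynamic network's policy module (information fusion)."""
    return np.concatenate([posteriors, gates], axis=1)

# Placeholder data: 10-class posteriors and 16 gate decisions per sample.
n_samples, n_classes, n_gates = 2000, 10, 16
posteriors = rng.dirichlet(np.ones(n_classes), size=n_samples)
gates = rng.integers(0, 2, size=(n_samples, n_gates)).astype(float)
membership = rng.integers(0, 2, size=n_samples)  # 1 = member, 0 = non-member

X = attack_features(posteriors, gates)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, membership, test_size=0.3, random_state=0
)

# Binary attack classifier predicting membership from the fused features.
attack_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("attack accuracy:", attack_model.score(X_te, y_te))
```

On the placeholder data the attack accuracy is near chance; with features extracted from a real dynamic NN, the gate decisions are intended to carry the additional membership signal described above.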