Large model sizes, high computational cost, and vulnerability to membership inference attacks (MIAs) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices. To address this challenge, we envision that the weight pruning technique can help defend DNNs against MIAs while reducing model storage and computation. In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIAs while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results show that the attack accuracy under model compression is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
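As a point of reference for readers unfamiliar with weight pruning, the sketch below shows a generic magnitude-based pruning step, which zeroes out the smallest-magnitude weights to obtain a sparse subnetwork. This is only an illustrative example of the general technique; the paper's specific pruning algorithm and its MIA-defense mechanism are not reproduced here.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of smallest-magnitude weights.

    Illustrative magnitude pruning only; not the paper's algorithm.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a 2x2 weight matrix
w = np.array([[0.5, -0.01], [0.2, -0.8]])
pruned = magnitude_prune(w, 0.5)  # keeps only 0.5 and -0.8
```

In practice such a mask would be applied per layer during or after training, followed by fine-tuning of the surviving weights.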