A Membership Inference Attack (MIA) assesses how much a target machine learning model reveals about its training data by determining whether specific query instances were part of the training set. State-of-the-art MIAs rely on training hundreds of shadow models that are independent of the target model, leading to significant computational overhead. In this paper, we introduce Imitative Membership Inference Attack (IMIA), which employs a novel imitative training technique to strategically construct a small number of target-informed imitative models that closely replicate the target model's behavior for inference. Extensive experimental results demonstrate that IMIA substantially outperforms existing MIAs across various attack settings while requiring less than 5% of the computational cost of state-of-the-art approaches.
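To make the shadow-/imitative-model idea concrete, the sketch below illustrates a generic confidence-comparison membership score: it contrasts the target model's loss on a query with the losses of a few reference (imitative) models. All function names, the loss choice, and the toy outputs are illustrative assumptions, not the paper's IMIA procedure.

```python
# Minimal sketch of a generic confidence-comparison membership inference score.
# NOT the paper's IMIA implementation; models and numbers are hypothetical
# placeholders illustrating the shadow-/imitative-model idea.
import numpy as np

def cross_entropy(probs: np.ndarray, label: int, eps: float = 1e-12) -> float:
    """Per-example cross-entropy loss from a model's softmax output."""
    return -float(np.log(probs[label] + eps))

def membership_score(target_probs: np.ndarray,
                     imitative_probs: list[np.ndarray],
                     label: int) -> float:
    """
    Score how much more confident the target model is on (x, y) than a small
    set of reference (imitative) models. A large positive gap suggests the
    example was likely a member of the target's training set.
    """
    target_loss = cross_entropy(target_probs, label)
    ref_losses = np.array([cross_entropy(p, label) for p in imitative_probs])
    # Training members tend to have unusually low loss under the target model
    # relative to the reference models' losses.
    return float(ref_losses.mean() - target_loss)

# Hypothetical usage with made-up softmax outputs for a 3-class problem.
target_out = np.array([0.92, 0.05, 0.03])          # target model on query x
imitative_outs = [np.array([0.60, 0.25, 0.15]),    # a few imitative models
                  np.array([0.55, 0.30, 0.15])]
print(membership_score(target_out, imitative_outs, label=0))
```

In this framing, the abstract's key point is that IMIA replaces the hundreds of independent shadow models with a few target-informed imitative models trained to mimic the target; the scoring rule shown here is only a generic placeholder.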


