Query-based black-box attacks pose serious threats to machine learning models in many real-world applications. In this work, we study a lightweight defense method, dubbed Random Noise Defense (RND), which adds proper Gaussian noise to each query. We conduct a theoretical analysis of the effectiveness of RND against query-based black-box attacks and the corresponding adaptive attacks. Our theoretical results reveal that the defense performance of RND is determined by the magnitude ratio between the noise induced by RND and the noise added by the attackers for gradient estimation or local search. A larger magnitude ratio leads to stronger defense performance of RND, and it is also critical for mitigating adaptive attacks. Based on our analysis, we further propose to combine RND with a plausible Gaussian augmentation fine-tuning (RND-GF), which enables RND to add larger noise to each query while maintaining clean accuracy, thus achieving a better trade-off between clean accuracy and defense performance. In addition, RND can be flexibly combined with existing defense methods, such as adversarial training (AT), to further boost adversarial robustness. Extensive experiments on CIFAR-10 and ImageNet verify our theoretical findings and the effectiveness of RND and RND-GF.
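For concreteness, the following is a minimal PyTorch sketch of the query-time behavior described above: fresh Gaussian noise is added to every incoming query before it reaches the protected model. The wrapper name RNDWrapper and the noise-scale parameter nu are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
import torch


class RNDWrapper(torch.nn.Module):
    """Minimal sketch of Random Noise Defense (RND).

    Every query is perturbed with zero-mean Gaussian noise of standard
    deviation `nu` (a hypothetical hyperparameter name) before inference.
    """

    def __init__(self, model: torch.nn.Module, nu: float = 0.02):
        super().__init__()
        self.model = model
        self.nu = nu  # standard deviation of the defensive Gaussian noise

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fresh noise is drawn for every query, so repeated queries on the
        # same input return different (noisy) outputs, which perturbs the
        # attacker's gradient estimation or local search.
        noisy_x = x + self.nu * torch.randn_like(x)
        return self.model(noisy_x)
```

Under the abstract's framing, a query-based attacker probing with its own perturbation scale (e.g., a finite-difference smoothing parameter) effectively competes with the defensive noise, so the ratio between the defense's noise scale and the attacker's sampling scale governs how much the attack degrades; the larger nu is relative to the attacker's scale, the noisier the attacker's estimates become.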