Neural networks have demonstrated state-of-the-art performance in various machine learning fields. However, inputs crafted with malicious perturbations, known as adversarial examples, have been shown to deceive neural networks into incorrect predictions. This poses potential risks for real-world applications such as autonomous driving and text identification. To mitigate these risks, a comprehensive understanding of the mechanisms underlying adversarial examples is essential. In this study, we demonstrate that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's incorrect prediction, in contrast to the widely held belief that human-unidentifiable characteristics play a critical role in fooling a network. This concept of human-recognizable information enables us to explain key features of adversarial perturbations, including their existence, their transferability among different neural networks, and the increased interpretability achieved by adversarial training. We also uncover two unique properties by which adversarial perturbations deceive neural networks: masking and generation. Additionally, a special class, the complementary class, is identified when neural networks classify input images. The presence of human-recognizable information in adversarial perturbations allows researchers to gain insight into the working principles of neural networks and may lead to the development of techniques for detecting and defending against adversarial attacks.
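To make the notion of an adversarial perturbation concrete, the sketch below shows the standard fast gradient sign method (FGSM) attack in PyTorch. This is a generic illustration, not the procedure used in this study; `model`, `image`, `label`, and `epsilon` are assumed placeholders for a trained classifier, a normalized input tensor, its ground-truth class, and the perturbation budget.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015), assuming a PyTorch
# classifier `model`, an input `image` of shape (1, C, H, W) with values
# in [0, 1], and an integer class `label` of shape (1,).
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, label, epsilon=8 / 255):
    """Return the adversarial perturbation and the perturbed image."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take a small step in the direction that increases the loss.
    perturbation = epsilon * image.grad.sign()
    adversarial_image = (image + perturbation).clamp(0.0, 1.0).detach()
    return perturbation.detach(), adversarial_image
```

The returned `perturbation` is the quantity examined in this study: a small additive signal that, when superimposed on the original image, changes the network's prediction.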