With the tremendous advances in the architecture and scale of convolutional neural networks (CNNs) over the past few decades, they can readily match or even exceed human performance on certain tasks. However, a recently discovered shortcoming of CNNs is their vulnerability to adversarial attacks. Although the adversarial robustness of CNNs can be improved through adversarial training, there is a trade-off between standard accuracy and adversarial robustness. From a neural architecture perspective, this paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy. With minimal computational overhead, the introduced dilation architecture is expected to preserve the standard performance of the backbone CNN while pursuing adversarial robustness. Theoretical analyses of the standard and adversarial error bounds naturally motivate the proposed neural architecture dilation algorithm. Experimental results on real-world datasets and benchmark neural networks demonstrate the effectiveness of the proposed algorithm in balancing accuracy and adversarial robustness.