Convolutional neural networks (CNNs) are known for their strong performance and generalization in vision-related tasks and have become state-of-the-art in both applied and research domains. However, like other neural network models, they are susceptible to noise and adversarial attacks. An adversarial defence aims to reduce a neural network's susceptibility to adversarial attacks through learning or architectural modifications. We propose the weight map layer (WM) as a generic architectural addition to CNNs and show that it can increase their robustness to noise and adversarial attacks. We further explain that the enhanced robustness of the two WM variants results from the adaptive activation-variance amplification exhibited by the layer. We show that the WM layer can be integrated into scaled-up models to increase their robustness to noise and adversarial attacks, while achieving comparable accuracy levels across different datasets.
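As a rough illustration of the idea, the sketch below shows one plausible reading of a weight map layer: a learnable per-location weight map multiplied elementwise with the incoming feature map, so that the layer can adaptively scale (and thus amplify the variance of) activations at different spatial positions. The class name, initialization, and channel-sharing choice are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class WeightMapLayer:
    """Hypothetical sketch of a weight map (WM) layer: one learnable
    weight per spatial location, shared across channels, applied by
    elementwise multiplication with the input feature map."""

    def __init__(self, height, width, seed=0):
        rng = np.random.default_rng(seed)
        # initialize near identity (all-ones map) with small perturbations;
        # these weights would be learned jointly with the rest of the CNN
        self.weight_map = 1.0 + 0.01 * rng.standard_normal((height, width))

    def forward(self, x):
        # x has shape (channels, height, width); the (height, width) map
        # broadcasts across channels, rescaling each spatial position
        return x * self.weight_map

# usage: wrap a feature map of shape (channels, height, width)
layer = WeightMapLayer(height=4, width=4)
features = np.ones((3, 4, 4))
out = layer.forward(features)
print(out.shape)  # (3, 4, 4)
```

Because the map multiplies activations pointwise, positions with larger learned weights contribute proportionally more output variance, which is one way an "adaptive activation-variance amplification" effect could arise.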