Pruning is a well-known mechanism for reducing the computational cost of deep convolutional networks. However, studies have also shown its potential as a form of regularization, reducing overfitting and improving generalization. We demonstrate that this family of strategies provides benefits beyond computational performance and generalization. Our analyses reveal that pruning structures (filters and/or layers) from convolutional networks increases not only generalization but also robustness to adversarial images (natural images whose content has been maliciously modified). We attribute this to the fact that pruning reduces network capacity and acts as a regularizer, both of which have proven effective against adversarial images. In contrast to promising defense mechanisms that require training with adversarial images and careful regularization, we show that pruning obtains competitive results while training only on natural images (i.e., standard, low-cost training). We confirm these findings across several adversarial attacks and architectures, suggesting the potential of pruning as a novel defense mechanism against adversarial images.
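As a concrete illustration of structured pruning (a generic sketch, not necessarily this paper's exact criterion), magnitude-based filter pruning ranks each convolutional filter by its L1 norm and removes the lowest-ranked fraction; the shapes and the `prune_filters` helper below are hypothetical:

```python
import numpy as np

def prune_filters(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Remove the fraction `ratio` of filters (axis 0) with the
    smallest L1 norms.

    `weights` has shape (out_channels, in_channels, kH, kW), as in a
    typical convolutional layer.
    """
    # L1 norm of each filter, summed over input channels and spatial dims.
    norms = np.abs(weights).sum(axis=(1, 2, 3))
    n_keep = weights.shape[0] - int(ratio * weights.shape[0])
    # Keep the n_keep filters with the largest norms, preserving order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep]

# Example: prune 50% of 8 filters from a 3x3 convolutional layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned = prune_filters(w, 0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In practice the pruned network is then fine-tuned on natural images only, which is the standard, low-cost training regime the abstract contrasts with adversarial training.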