Training an ensemble of diverse sub-models has empirically proven to be an effective strategy for improving the adversarial robustness of deep neural networks. Current ensemble training methods for image recognition usually encode image labels as one-hot vectors, which neglect the dependency relationships between labels. Here we propose a novel adversarial ensemble training approach that jointly learns the label dependencies and the member models. Our approach adaptively exploits the learned label dependencies to promote diversity among the member models. We evaluate our approach on the widely used MNIST, FashionMNIST, and CIFAR-10 datasets. Results show that our approach is more robust against black-box attacks than state-of-the-art methods. Our code is available at https://github.com/ZJLAB-AMMI/LSD.
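The abstract does not spell out the training objective, so the following PyTorch sketch is only a rough illustration of the general idea (not the authors' actual method): each member is trained against targets smoothed by a learnable label-dependency matrix, and a pairwise diversity penalty discourages the members from agreeing. All names (LabelSimilarity, soft_targets, diversity_loss, lambda_div, eps) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10


class LabelSimilarity(nn.Module):
    """Learnable, row-normalized label-dependency matrix (illustrative)."""
    def __init__(self, num_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes, num_classes))

    def forward(self):
        # S[i, j] ~ learned relatedness of class j to class i; rows sum to 1
        return F.softmax(self.logits, dim=1)


def soft_targets(y, S, eps=0.1):
    """Blend one-hot labels with the learned label dependencies."""
    one_hot = F.one_hot(y, NUM_CLASSES).float()
    return (1.0 - eps) * one_hot + eps * one_hot @ S()


def diversity_loss(probs_list):
    """Penalize agreement among member predictions via pairwise inner products."""
    loss, pairs = 0.0, 0
    for i in range(len(probs_list)):
        for j in range(i + 1, len(probs_list)):
            loss = loss + (probs_list[i] * probs_list[j]).sum(dim=1).mean()
            pairs += 1
    return loss / max(pairs, 1)


def ensemble_step(members, S, optimizer, x, y, lambda_div=0.5):
    """One joint update of the member models and the label-dependency matrix."""
    optimizer.zero_grad()
    log_probs = [F.log_softmax(m(x), dim=1) for m in members]
    probs = [lp.exp() for lp in log_probs]
    targets = soft_targets(y, S)
    fit = sum(F.kl_div(lp, targets, reduction="batchmean") for lp in log_probs)
    loss = fit / len(members) + lambda_div * diversity_loss(probs)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy members on MNIST-shaped inputs, trained jointly with the matrix S.
    members = [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, NUM_CLASSES)) for _ in range(3)]
    S = LabelSimilarity(NUM_CLASSES)
    params = [p for m in members for p in m.parameters()] + list(S.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    x = torch.randn(32, 1, 28, 28)
    y = torch.randint(0, NUM_CLASSES, (32,))
    print(ensemble_step(members, S, optimizer, x, y))
```

In this sketch the label-dependency matrix and the members share one optimizer, so the dependencies adapt during training; the actual formulation used by the paper is given in its method section and in the linked repository.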