Training an ensemble of diverse sub-models has empirically proven to be an effective strategy for improving the adversarial robustness of deep neural networks. Current ensemble training methods for image recognition usually encode image labels as one-hot vectors, which neglects the dependency relationships between labels. Here we propose a novel adversarial ensemble training approach that jointly learns the conditional dependencies between labels and the model ensemble. We evaluate our approach on the widely used MNIST, FashionMNIST, and CIFAR-10 datasets. Results show that our approach is more robust against black-box attacks than state-of-the-art methods. Our code is available at https://github.com/ZJLAB-AMMI/LSD.