Recent works have theoretically and empirically shown that deep neural networks (DNNs) have an inherent vulnerability to small perturbations. Applying the Deep k-Nearest Neighbors (DkNN) classifier, we observe a dramatically increasing robustness-accuracy trade-off as layers go deeper. In this work, we propose a Deep Adversarially-Enhanced k-Nearest Neighbors (DAEkNN) method which achieves higher robustness than DkNN and mitigates the robustness-accuracy trade-off in deep layers through two key elements. First, DAEkNN is based on an adversarially trained model. Second, DAEkNN makes predictions by leveraging a weighted combination of benign and adversarial training data. Empirically, we find that DAEkNN improves both the robustness and the robustness-accuracy trade-off on the MNIST and CIFAR-10 datasets.
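To make the second element concrete, the following is a minimal sketch of a weighted nearest-neighbor vote over benign and adversarial training features, assuming deep-layer features have already been extracted from an adversarially trained model; the function name, the weights `w_benign`/`w_adv`, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def weighted_knn_predict(query_feat, benign_feats, benign_labels,
                         adv_feats, adv_labels, k=5,
                         w_benign=0.5, w_adv=0.5, num_classes=10):
    """Combine class votes from benign and adversarial training features."""
    votes = np.zeros(num_classes)
    for feats, labels, w in ((benign_feats, benign_labels, w_benign),
                             (adv_feats, adv_labels, w_adv)):
        # Euclidean distance from the query feature to every stored feature.
        dists = np.linalg.norm(feats - query_feat, axis=1)
        # Each of the k nearest neighbors casts a vote scaled by its source weight.
        for idx in np.argsort(dists)[:k]:
            votes[labels[idx]] += w
    return int(np.argmax(votes))

# Toy usage with random 64-dimensional features standing in for deep-layer representations.
rng = np.random.default_rng(0)
benign_feats = rng.normal(size=(100, 64))
benign_labels = rng.integers(0, 10, size=100)
adv_feats = rng.normal(size=(100, 64))
adv_labels = rng.integers(0, 10, size=100)
query = rng.normal(size=64)
print(weighted_knn_predict(query, benign_feats, benign_labels, adv_feats, adv_labels))
```

In this sketch, the relative weights on benign and adversarial neighbors control how strongly each data source influences the prediction, which is one plausible way to trade off clean accuracy against robustness.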