K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability. However, the robustness of kNN-based classification models has not been thoroughly explored, and kNN attack strategies remain underdeveloped. In this paper, we propose an Adversarial Soft kNN (ASK) loss both to design more effective kNN attack strategies and to develop better defenses against them. Our ASK loss approach has two advantages. First, the ASK loss can better approximate the kNN classifier's probability of classification error than objectives proposed in previous works. Second, the ASK loss is interpretable: it preserves the mutual information between the perturbed input and the kNN of the unperturbed input. We use the ASK loss to generate a novel attack method called the ASK-Attack (ASK-Atk), which shows superior attack efficiency and stronger accuracy degradation than previous kNN attacks. Based on ASK-Atk, we then derive an ASK-Defense (ASK-Def) method that optimizes the worst-case training loss induced by ASK-Atk.
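To make the idea of a soft kNN surrogate loss and an attack built on it concrete, the sketch below gives a minimal PyTorch illustration. It is an assumption-laden approximation, not the paper's exact formulation: the function names (soft_knn_loss, ask_attack), the use of cosine similarity, the temperature tau, the top-k per-class aggregation, and the L-infinity PGD loop are all illustrative choices.

```python
# Hypothetical sketch of a soft kNN surrogate loss and a PGD-style attack that
# maximizes it. Details (cosine similarity, temperature, top-k mean per class,
# L-infinity PGD) are illustrative assumptions, not the paper's exact ASK loss.
import torch
import torch.nn.functional as F

def soft_knn_loss(feat, ref_feats, ref_labels, label, k=5, tau=0.1):
    """Soft surrogate for the kNN classification error.

    feat:       (d,) feature of the (possibly perturbed) input
    ref_feats:  (N, d) features of the reference (training) set
    ref_labels: (N,) integer labels in {0, ..., C-1}, all classes present
    label:      scalar true label of the input
    """
    sims = F.cosine_similarity(feat.unsqueeze(0), ref_feats, dim=1)  # (N,)
    scores = []
    for c in ref_labels.unique():  # unique() returns sorted class ids
        cls_sims = sims[ref_labels == c]
        topk = cls_sims.topk(min(k, cls_sims.numel())).values
        scores.append(topk.mean())  # soft per-class score from k nearest refs
    scores = torch.stack(scores) / tau
    # Cross-entropy over soft class scores approximates the kNN error probability.
    return F.cross_entropy(scores.unsqueeze(0), label.view(1))

def ask_attack(model, x, label, ref_feats, ref_labels,
               eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD ascent on the soft kNN loss (illustrative only).

    model: maps an image batch (1, C, H, W) to the feature space used for kNN.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = soft_knn_loss(model(x + delta).squeeze(0),
                             ref_feats, ref_labels, label)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-sign ascent step
            delta.clamp_(-eps, eps)              # project back into the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()
```

Under this reading, a defense in the spirit of ASK-Def would wrap the same surrogate in an adversarial-training loop: generate perturbations with ask_attack and minimize the resulting worst-case soft kNN loss over the model parameters.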