Nearest neighbor-based methods are commonly used for classification tasks and as subroutines of other data-analysis methods. An attacker who can insert their own data points into the training set can manipulate the inferred nearest neighbor structure. We distill this goal to the task of performing a training-set data insertion attack against $k$-Nearest Neighbor classification ($k$NN). We prove that computing an optimal training-time (a.k.a. poisoning) attack against $k$NN classification is NP-hard, even when $k = 1$ and the attacker can insert only a single data point. We provide an anytime algorithm to perform such an attack, and a greedy algorithm for general $k$ and attacker budget. We establish theoretical bounds and empirically demonstrate the effectiveness and practicality of our methods on synthetic and real-world datasets. We find that $k$NN is vulnerable in practice and that dimensionality reduction is an effective defense. We conclude with a discussion of open problems illuminated by our analysis.
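To make the attack model concrete, below is a minimal, illustrative Python sketch of a greedy budgeted insertion attack against a $k$NN classifier. It is not the paper's anytime or greedy algorithm: the toy data, the candidate-sampling scheme around the target points, the budget of five insertions, and the use of scikit-learn's `KNeighborsClassifier` are all assumptions made purely for a self-contained demo.

```python
# Illustrative sketch only: NOT the paper's algorithm. The toy data,
# candidate sampling, and budget below are assumptions for a minimal demo.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy training set: two Gaussian blobs in 2-D, labeled 0 and 1.
X_train = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                     rng.normal(2.0, 0.5, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# Points the attacker wants misclassified (true label 0).
X_target = rng.normal(0.0, 0.5, (10, 2))
y_target = np.zeros(10, dtype=int)

def attack_success(poison_X, poison_y):
    """Fraction of target points misclassified after inserting the poisons."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(np.vstack([X_train, poison_X]),
            np.concatenate([y_train, poison_y]))
    return np.mean(clf.predict(X_target) != y_target)

budget = 5                     # number of points the attacker may insert
poison_X = np.empty((0, 2))
poison_y = np.empty(0, dtype=int)
for _ in range(budget):
    # Greedy step: sample candidate poisons near the targets, labeled with
    # the attacker's desired class (1), and keep the most damaging one.
    cands = (X_target[rng.integers(0, len(X_target), 200)]
             + rng.normal(0.0, 0.3, (200, 2)))
    best = max(cands, key=lambda c: attack_success(np.vstack([poison_X, c]),
                                                   np.append(poison_y, 1)))
    poison_X = np.vstack([poison_X, best])
    poison_y = np.append(poison_y, 1)

print(f"fraction of targets flipped: {attack_success(poison_X, poison_y):.2f}")
```

The greedy step mirrors the setting described in the abstract only in spirit: each inserted point is chosen to maximize the fraction of misclassified targets given the points already inserted, and the loop can be stopped after any round with a valid (if weaker) poison set.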