Graph Laplacian (GL)-based semi-supervised learning is one of the most widely used approaches for classifying nodes in a graph. Understanding and certifying the adversarial robustness of machine learning (ML) algorithms has attracted significant attention from different research communities due to its crucial importance in many security-critical application domains. There is great interest in theoretically certifying the adversarial robustness of popular ML algorithms. In this paper, we provide the first adversarial robustness certification for the GL classifier. More precisely, we quantitatively bound the difference in the classification accuracy of the GL classifier before and after an adversarial attack. Numerically, we validate our theoretical certification results and show that leveraging existing adversarial defenses for the $k$-nearest neighbor classifier can markedly improve the robustness of the GL classifier.
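For context, the GL classifier is commonly taken to be the harmonic-function (label-propagation) estimator of Zhu, Ghahramani, and Lafferty (2003). The sketch below illustrates that standard baseline; it is our own minimal NumPy rendering for illustration, not the paper's code, and it assumes every connected component of the graph contains at least one labeled node so that the relevant Laplacian block is invertible.

```python
import numpy as np

def gl_classifier(W, labels, labeled_idx):
    """Graph Laplacian (harmonic-function) semi-supervised classifier.

    W           : (n, n) symmetric nonnegative adjacency matrix
    labels      : (m,) integer class labels of the labeled nodes
    labeled_idx : (m,) indices of the labeled nodes
    Returns predicted class labels for all n nodes.
    """
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    # One-hot encode the known labels.
    classes = np.unique(labels)
    Y_l = (labels[:, None] == classes[None, :]).astype(float)

    # Harmonic solution on the unlabeled nodes:
    #   f_u = -L_uu^{-1} L_ul f_l
    # L_uu is invertible when each connected component has a labeled node.
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    F_u = np.linalg.solve(L_uu, -L_ul @ Y_l)

    # Labeled nodes keep their labels; unlabeled nodes take the argmax class.
    pred = np.empty(n, dtype=classes.dtype)
    pred[labeled_idx] = labels
    pred[unlabeled_idx] = classes[np.argmax(F_u, axis=1)]
    return pred
```

An adversarial attack in this setting perturbs the node features (and hence the graph `W`, e.g., a $k$-nearest neighbor graph built from those features); the certification bounds how much the accuracy of `gl_classifier` can degrade under such a perturbation.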