A polygonal mesh is the most commonly used surface representation in computer graphics; thus, a variety of mesh classification networks have recently been proposed. However, while adversarial attacks are widely researched in 2D, almost no work exists on adversarial meshes. This paper proposes a novel, unified, and general adversarial attack that leads to the misclassification of numerous state-of-the-art mesh classification neural networks. Our attack is black-box, i.e., it has access only to the network's predictions, not to its full architecture or gradients. The key idea is to train a network to imitate the given classification network. This is done by utilizing random walks along the mesh surface, which gather geometric information. These walks reveal the regions of the mesh that are important for the given network's correct prediction. These regions are then modified more than others, attacking the network in a manner that is barely visible to the naked eye.
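To make the random-walk step concrete, here is a minimal sketch, assuming vertex-to-vertex walks along mesh edges and 3D displacement vectors between consecutive walk vertices as the gathered geometric feature; the names (`build_adjacency`, `random_walk_features`, `walk_len`) and the preference for unvisited neighbors are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_adjacency(faces, num_vertices):
    """Vertex-to-neighbor lists built from triangle faces."""
    adj = [set() for _ in range(num_vertices)]
    for a, b, c in faces:
        adj[a].update((b, c))
        adj[b].update((a, c))
        adj[c].update((a, b))
    return [sorted(s) for s in adj]

def random_walk_features(vertices, adj, walk_len, rng):
    """Walk along mesh edges; return 3D step vectors as geometric features."""
    walk = [int(rng.integers(len(vertices)))]  # random start vertex
    visited = set(walk)
    for _ in range(walk_len - 1):
        nbrs = adj[walk[-1]]
        fresh = [n for n in nbrs if n not in visited]  # prefer unvisited vertices
        nxt = int(rng.choice(fresh if fresh else nbrs))
        walk.append(nxt)
        visited.add(nxt)
    coords = vertices[np.array(walk)]
    return coords[1:] - coords[:-1]  # displacements between consecutive vertices

# Toy usage on a tetrahedron: an 8-step walk yields 7 displacement features.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
feats = random_walk_features(verts, build_adjacency(faces, len(verts)), 8,
                             np.random.default_rng(0))
print(feats.shape)  # (7, 3)
```

In a black-box setting, many such walk features would be fed to the imitation network, which is trained to match the target network's predicted labels.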