Generating adversarial examples is the art of crafting a perturbation that, when added to the input of a classifying neural network, changes the network's prediction while keeping the perturbation as small as possible. While the subject is well researched in the 2D regime, it lags behind in the 3D regime, i.e., attacking networks that classify 3D point clouds or meshes, for example networks that classify the pose of 3D human scans. To date, the vast majority of papers describing adversarial attacks in this regime rely on per-input optimization. In this technical report we propose a neural network that generates the attacks instead. The network adapts PointNet's architecture with some alterations. Whereas the prior work on which we build must optimize each shape separately, i.e., tailor an attack from scratch for each individual input without any learning, we aim to train a unified model that can produce the required adversarial example in a single forward pass.
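To make the contrast concrete, the following minimal sketch juxtaposes the two approaches: a per-shape optimization loop versus a single forward pass through a trained generator. This is an illustrative sketch only, not the report's implementation; PyTorch is assumed, and the names `classifier`, `generator`, and all hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def optimize_attack(classifier, x, target, steps=200, lr=0.01, lam=1.0):
    """Per-shape optimization baseline: tailor a perturbation from scratch
    for a single batched point cloud x of shape (1, N, 3), no learning."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = classifier(x + delta)  # (1, num_classes)
        # Drive the prediction toward the target label while penalizing
        # the L2 norm of the noise to keep the perturbation small.
        loss = F.cross_entropy(logits, target) + lam * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()

def generate_attack(generator, x):
    """Learned alternative: a trained PointNet-style generator emits the
    perturbation for any input in a single forward pass, no inner loop."""
    with torch.no_grad():
        return x + generator(x)
```

The first function must rerun its inner loop for every new shape, while the second amortizes that cost: once the generator has been trained (presumably with a loss balancing misclassification against perturbation size, analogous to the one above), attacking a new input is a single network evaluation.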