With the thriving of deep learning in processing point cloud data, recent works have shown that backdoor attacks pose a severe security threat to 3D vision applications. The attacker injects the backdoor into the 3D model by poisoning a few training samples with a trigger, such that the backdoored model performs well on clean samples but behaves maliciously when the trigger pattern appears. Existing attacks often insert some additional points into the point cloud as the trigger, or utilize a linear transformation (e.g., rotation) to construct the poisoned point cloud. However, the effects of these poisoned samples are likely to be weakened or even eliminated by some commonly used pre-processing techniques for 3D point clouds, e.g., outlier removal or rotation augmentation. In this paper, we propose a novel imperceptible and robust backdoor attack (IRBA) to tackle this challenge. We utilize a nonlinear and local transformation, called weighted local transformation (WLT), to construct poisoned samples with unique transformations. Since WLT involves several hyper-parameters and randomness, it is difficult to produce two similar transformations. Consequently, poisoned samples with unique transformations are likely to be resistant to the aforementioned pre-processing techniques. Besides, owing to the controllability and smoothness of the distortion caused by a fixed WLT, the generated poisoned samples are also imperceptible to human inspection. Extensive experiments on three benchmark datasets and four models show that IRBA achieves an attack success rate (ASR) above 80% in most cases even with pre-processing techniques, which is significantly higher than previous state-of-the-art attacks.
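To make the idea of a nonlinear, local, smooth transformation concrete, the following is a minimal sketch, not the paper's exact WLT implementation: it assumes the point cloud is an (N, 3) NumPy array, samples `num_anchors` anchor points each carrying a small random rigid motion, and displaces every point by a smooth distance-weighted blend of the per-anchor motions. All function names and parameters (`weighted_local_transform`, `max_rot_deg`, `max_shift`, the Gaussian bandwidth 0.05) are illustrative assumptions.

```python
import numpy as np

def weighted_local_transform(points, num_anchors=16, max_rot_deg=5.0,
                             max_shift=0.02, seed=0):
    """Hypothetical sketch of a WLT-style poisoning transform.

    Each anchor point carries a small random rigid motion; every point
    is displaced by a distance-weighted blend of the per-anchor motions,
    yielding a smooth, nonlinear, and local distortion of the cloud.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]

    # Sample anchors from the cloud and a small random rigid motion per anchor.
    anchors = points[rng.choice(n, num_anchors, replace=False)]
    angles = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg, (num_anchors, 3)))
    shifts = rng.uniform(-max_shift, max_shift, (num_anchors, 3))

    def rot_matrix(ax, ay, az):
        # Compose rotations about the x, y, and z axes.
        cx, sx = np.cos(ax), np.sin(ax)
        cy, sy = np.cos(ay), np.sin(ay)
        cz, sz = np.cos(az), np.sin(az)
        rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return rz @ ry @ rx

    # Per-anchor transformed copies of every point, shape (K, N, 3):
    # rotate about each anchor, then translate.
    moved = np.stack([
        (points - anchors[k]) @ rot_matrix(*angles[k]).T + anchors[k] + shifts[k]
        for k in range(num_anchors)
    ])

    # Smooth softmax-style weights: nearby anchors dominate, so each
    # distortion stays local while the blended result remains smooth.
    d2 = ((points[None, :, :] - anchors[:, None, :]) ** 2).sum(-1)  # (K, N)
    w = np.exp(-d2 / 0.05)
    w /= w.sum(axis=0, keepdims=True)

    return (w[..., None] * moved).sum(axis=0)  # poisoned cloud, shape (N, 3)

# Example: poison a random 1024-point cloud. Different seeds and
# hyper-parameters yield distinct transformations, which is what makes
# each poisoned sample hard to undo with generic pre-processing.
cloud = np.random.rand(1024, 3).astype(np.float64)
poisoned = weighted_local_transform(cloud, num_anchors=16, seed=42)
```

Because the displacement field is a smooth blend rather than a global rotation or injected outlier points, a sketch like this inserts no extra points for outlier removal to strip, and no single global rotation for rotation augmentation to absorb.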