Deep learning networks have demonstrated high performance in a wide variety of applications, such as image classification, speech recognition, and natural language processing. However, they suffer from a major vulnerability that is exploited by adversarial attacks. An adversarial attack alters the input image so slightly that the change is nearly undetectable to the naked eye, yet it causes the network to produce a very different classification. This paper explores the projected gradient descent (PGD) attack and the Adaptive Segmentation Mask Attack (ASMA) on the DeepLabV3 image segmentation model using two backbone architectures: MobileNetV3 and ResNet50. It was found that PGD was very consistent in changing the segmentation to match its target, while the generalization of ASMA to a multiclass target was not as effective. The existence of such attacks, however, puts all deep learning networks for image classification at risk of exploitation.
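To make the attack setting concrete, the following is a minimal sketch of a targeted PGD attack against a torchvision DeepLabV3 model with a ResNet50 backbone. The step size, perturbation budget, iteration count, and the `targeted_pgd` helper are illustrative assumptions, not the exact configuration used in this work, and the input is assumed to be an unnormalized batch in [0, 1].

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

def targeted_pgd(model, image, target_mask, eps=8 / 255, alpha=2 / 255, steps=40):
    """Sketch of targeted PGD: nudge the input toward a chosen segmentation mask.

    image: (N, 3, H, W) tensor in [0, 1] (assumed, for illustration)
    target_mask: (N, H, W) long tensor of desired per-pixel class labels
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)["out"]                    # (N, C, H, W) per-pixel class scores
        loss = F.cross_entropy(logits, target_mask)   # loss w.r.t. the *target* labels
        grad = torch.autograd.grad(loss, adv)[0]
        # Targeted attack: step down the loss so predictions move toward the target mask.
        adv = adv.detach() - alpha * grad.sign()
        # Project back into the L-infinity ball of radius eps around the clean image.
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()

# Example usage with a pretrained DeepLabV3-ResNet50 (weights are an assumption).
model = deeplabv3_resnet50(weights="DEFAULT").eval()
```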