The rapid growth of real-time, large-scale data capture has pushed deep learning and data analytics to edge systems. Real-time object recognition on the edge is a representative deep neural network (DNN) powered edge system for real-world mission-critical applications, such as autonomous driving and augmented reality. While DNN-powered object detection edge systems create many life-enriching opportunities, they also open doors for misuse and abuse. This paper presents three Targeted adversarial Objectness Gradient attacks, coined TOG, which can cause state-of-the-art deep object detection networks to suffer from object-vanishing, object-fabrication, and object-mislabeling attacks. We also present a universal objectness gradient attack that exploits adversarial transferability for black-box attacks; it is effective on arbitrary inputs with negligible attack time cost and low human perceptibility, and it is particularly detrimental to object detection edge systems. We report experimental measurements on two benchmark datasets (PASCAL VOC and MS COCO) with two state-of-the-art detection algorithms (YOLO and SSD). The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
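The attacks above share one core mechanism: iteratively perturbing the input along the gradient of the detector's objectness score, with the perturbation confined to a small norm ball so it stays imperceptible. The sketch below is a toy illustration of that general idea only, not the actual TOG implementation: it uses a hypothetical stand-in objectness score `sigmoid(w·x + b)` in place of a real detector, and drives the score down (an "object-vanishing" style attack) under an L-infinity constraint.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vanishing_attack(x, w, b, eps=0.1, alpha=0.01, steps=50):
    """Toy 'object-vanishing' attack: repeatedly nudge the input x
    against the gradient of a stand-in objectness score
    sigmoid(w.x + b), keeping the perturbation inside an L-inf ball
    of radius eps around the original input (so it stays small)."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        s = sigmoid(w @ x_adv + b)               # objectness score in (0, 1)
        grad = s * (1.0 - s) * w                 # d(score)/d(x_adv)
        x_adv = x_adv - alpha * np.sign(grad)    # descend on objectness
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # enforce L-inf bound
    return x_adv

# Hypothetical setup: random "detector" weights and a random input.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.5
x = rng.normal(size=64)

before = sigmoid(w @ x + b)
x_adv = vanishing_attack(x, w, b)
after = sigmoid(w @ x_adv + b)
```

In a real attack the stand-in score is replaced by a detector's per-box objectness outputs and the gradient is obtained by backpropagation; the targeted variants (fabrication, mislabeling) change only the loss being ascended or descended, not the perturb-and-clip loop.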