The use of deep learning for human identification and object detection is becoming ever more prevalent in the surveillance industry. These systems are trained to identify human bodies or faces with a high degree of accuracy. However, there have been successful attempts to fool these systems using techniques known as adversarial attacks. This paper presents a final report on an adversarial attack using visible light against facial recognition systems. The relevance of this research lies in exposing the physical vulnerabilities of deep neural networks; by demonstrating these weaknesses, we hope this work can be used in the future to improve the training of object recognition models. As results were gathered, the project objectives were adjusted to fit the outcomes; consequently, the paper initially explores an adversarial attack using infrared light before shifting to a visible light attack. A research outline on infrared light and facial recognition is presented, followed by a detailed analysis of the current findings and recommendations for future work on the project. The challenges encountered are evaluated and a final solution is delivered. The project's final outcome demonstrates the ability to effectively fool recognition systems using light.