Robustness under adverse weather conditions is considered a significant challenge for computer vision algorithms in autonomous driving applications. Image rain-removal algorithms are a general solution to this problem. They exploit the powerful representation capabilities of neural networks to mine hidden features, model the relationship between raindrops/rain streaks and the underlying image, and restore information about the rain-free scene. However, previous research has focused on architectural innovations and has yet to consider the vulnerability issues that already exist in neural networks. This research gap implies a potential security threat to the intelligent perception of autonomous vehicles in rain. In this paper, we propose a universal rain-removal attack (URA) that exploits the vulnerability of image rain-removal algorithms by generating a non-additive spatial perturbation which significantly reduces the similarity and image quality of the restored scene. Notably, this perturbation is difficult for humans to recognise and remains the same across different target images. Thus, URA can serve as a critical tool for detecting vulnerabilities in image rain-removal algorithms, and it could also be developed into a real-world artificial-intelligence attack method. Experimental results show that URA reduces scene restoration capability by 39.5% and image generation quality by 26.4% when targeting state-of-the-art (SOTA) single-image rain-removal algorithms.
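To make the idea of a universal, non-additive spatial perturbation concrete, the following is a minimal PyTorch sketch of how such an attack could be set up: a single shared flow field warps every input image before deraining, and is optimised to maximise the restoration error. The `derain_model`, the paired data `loader`, the flow-field parameterisation, and the L1 objective here are illustrative assumptions, not the authors' actual URA implementation.

```python
# Hypothetical sketch: one shared (universal) spatial flow field is optimised so
# that warped inputs degrade the output of a rain-removal network.
import torch
import torch.nn.functional as F

def universal_spatial_attack(derain_model, loader, steps=100, lr=1e-2, eps=0.01,
                             image_size=(256, 256), device="cuda"):
    H, W = image_size
    # One flow field shared by all target images (the "universal" perturbation).
    flow = torch.zeros(1, H, W, 2, device=device, requires_grad=True)
    # Identity sampling grid in [-1, 1] coordinates, as expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=device),
        torch.linspace(-1, 1, W, device=device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2)

    opt = torch.optim.Adam([flow], lr=lr)
    derain_model.eval()
    for _ in range(steps):
        for rainy, clean in loader:                     # paired rainy / rain-free images
            rainy, clean = rainy.to(device), clean.to(device)
            grid = (base_grid + flow).expand(rainy.size(0), -1, -1, -1)
            # Non-additive perturbation: resample pixel positions instead of adding noise.
            perturbed = F.grid_sample(rainy, grid, align_corners=True)
            restored = derain_model(perturbed)
            # Maximise the restoration error w.r.t. the rain-free reference.
            loss = -F.l1_loss(restored, clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the displacement small so the perturbation stays hard to notice.
            with torch.no_grad():
                flow.clamp_(-eps, eps)
    return flow.detach()
```

Because the same flow field is applied to every image, the returned perturbation can be precomputed once and reused at attack time, which is what makes a universal perturbation practical as a real-world threat model.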