Advances in deep learning have driven steady progress in computer vision, with improved accuracy on tasks such as object detection and semantic segmentation. Nevertheless, deep neural networks are vulnerable to adversarial attacks, which poses a challenge to their reliable deployment. Two prominent tasks in 3D scene understanding for robotics and advanced driver-assistance systems are monocular depth and pose estimation, often learned jointly in an unsupervised manner. While studies evaluating the impact of adversarial attacks on monocular depth estimation exist, a systematic demonstration and analysis of adversarial perturbations against pose estimation are lacking. We show how additive imperceptible perturbations can not only change predictions to increase trajectory drift but also catastrophically alter the trajectory's geometry. We also study the relation between adversarial perturbations targeting monocular depth and pose estimation networks, as well as the transferability of perturbations to other networks with different architectures and losses. Our experiments show how the generated perturbations lead to notable errors in relative rotation and translation predictions and elucidate the vulnerabilities of these networks.
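For concreteness, the additive imperceptible perturbations described above can be instantiated with a standard L-infinity projected gradient descent (PGD) attack against a pose network. The sketch below is a minimal illustration, not the paper's exact method: it assumes a PyTorch module `pose_net` that maps a concatenated image pair to a 6-DoF relative pose, and the budget `eps`, step size `alpha`, and step count are hypothetical values chosen for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_pose_attack(pose_net, img_pair, eps=2 / 255, alpha=0.5 / 255, steps=10):
    """Untargeted L-inf PGD sketch: find an additive perturbation delta with
    ||delta||_inf <= eps that pushes the predicted relative pose away from
    the clean prediction. `pose_net` is an assumed pose-estimation module."""
    pose_net.eval()
    for p in pose_net.parameters():          # attack the input, not the weights
        p.requires_grad_(False)

    with torch.no_grad():
        clean_pose = pose_net(img_pair)       # e.g. 6-DoF [tx, ty, tz, rx, ry, rz]

    delta = torch.zeros_like(img_pair, requires_grad=True)
    for _ in range(steps):
        adv_pose = pose_net((img_pair + delta).clamp(0, 1))
        loss = F.mse_loss(adv_pose, clean_pose)   # maximize deviation from clean pose
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()    # gradient *ascent* step
            delta.clamp_(-eps, eps)               # project back into the eps-ball
            delta.grad.zero_()
    return delta.detach()
```

With a budget as small as 2/255 per pixel the perturbation remains imperceptible, yet accumulating the perturbed per-frame pose predictions over a sequence is what produces the trajectory drift and geometry distortion discussed in the abstract.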