Self-supervised monocular depth estimation is a salient task for 3D scene understanding. Several methods, trained jointly with monocular ego-motion estimation, have been proposed to predict accurate pixel-wise depth without labeled data. Nevertheless, these methods focus on improving performance under ideal conditions, free of natural or digital corruptions, and a general absence of occlusions is assumed even for object-specific depth estimation. They are also vulnerable to adversarial attacks, which is a pertinent concern for their reliable deployment on robots and autonomous driving systems. We propose MIMDepth, a method that adapts masked image modeling (MIM) for self-supervised monocular depth estimation. While MIM has typically been used to learn generalizable features during pre-training, we show how it can be adapted for the direct training of monocular depth estimation. Our experiments show that MIMDepth is more robust to noise, blur, weather conditions, digital artifacts, and occlusions, as well as to untargeted and targeted adversarial attacks.
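To make the core idea concrete, the sketch below shows one plausible way to apply patch-level input masking during self-supervised depth training: the depth network receives a randomly masked target frame, while the photometric reconstruction loss is still computed against the full, unmasked frames. This is only a minimal illustration under assumed settings (PyTorch, patch size, mask ratio); the helper names `mask_patches`, `depth_net`, `photometric_loss`, and `warp` are hypothetical and not taken from the paper.

```python
import torch


def mask_patches(img: torch.Tensor, patch_size: int = 16, mask_ratio: float = 0.5):
    """Zero out a random fraction of non-overlapping square patches.

    img: (B, C, H, W) tensor; H and W are assumed divisible by patch_size.
    Returns the masked image and the boolean per-patch mask (True = masked).
    """
    B, C, H, W = img.shape
    gh, gw = H // patch_size, W // patch_size
    # One Bernoulli draw per patch per sample.
    mask = torch.rand(B, gh, gw, device=img.device) < mask_ratio
    # Upsample the patch mask to pixel resolution.
    pixel_mask = mask.repeat_interleave(patch_size, 1).repeat_interleave(patch_size, 2)
    masked_img = img * (~pixel_mask).unsqueeze(1)  # zero the masked patches
    return masked_img, mask


# Hypothetical training step (names are placeholders, not the paper's API):
# masked_target, _ = mask_patches(target)
# depth = depth_net(masked_target)                 # depth predicted from masked input
# loss = photometric_loss(warp(source, depth, pose), target)  # supervised by full frames
```

The intent of such a scheme is that withholding patches at training time forces the depth network to rely on broader scene context, which is consistent with the robustness to corruptions and occlusions reported in the abstract.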