Self-supervised monocular depth estimation is a salient task for 3D scene understanding. Several methods have been proposed to predict accurate pixel-wise depth without labeled data by learning depth jointly with monocular ego-motion estimation. Nevertheless, these methods focus on improving performance under ideal conditions, free of natural or digital corruptions. They generally assume the absence of occlusions, even for object-specific depth estimation. These methods are also vulnerable to adversarial attacks, which is a pertinent concern for their reliable deployment in robots and autonomous driving systems. We propose MIMDepth, a method that adapts masked image modeling (MIM) for self-supervised monocular depth estimation. While MIM has been used to learn generalizable features during pre-training, we show how it can be adapted for direct training of monocular depth estimation. Our experiments show that MIMDepth is more robust to noise, blur, weather conditions, digital artifacts, occlusions, and both untargeted and targeted adversarial attacks.
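To make the idea of applying MIM during depth training concrete, the following is a minimal sketch, not the authors' implementation: it randomly masks a fraction of image patches at the input of a ViT-style depth encoder and replaces them with a learnable mask token, so the network must infer scene structure from the remaining context. The class name, `mask_ratio`, and all hyperparameter values are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class MaskedPatchEmbed(nn.Module):
    """Hypothetical patch embedding with MIM-style random masking (illustrative sketch)."""

    def __init__(self, patch_size=16, in_chans=3, embed_dim=768, mask_ratio=0.5):
        super().__init__()
        # Standard ViT patchification via a strided convolution.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        # Learnable token substituted for masked patches (assumed design choice).
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.mask_ratio = mask_ratio

    def forward(self, x):
        # Patchify: (B, C, H, W) -> (B, N, D)
        tokens = self.proj(x).flatten(2).transpose(1, 2)
        if self.training:
            B, N, D = tokens.shape
            num_mask = int(self.mask_ratio * N)
            # Randomly select patches to mask for each sample in the batch.
            noise = torch.rand(B, N, device=tokens.device)
            ids = noise.argsort(dim=1)[:, :num_mask]
            mask = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
            mask.scatter_(1, ids, True)
            # Replace masked patch embeddings with the mask token; the depth
            # head must then recover geometry from unmasked context.
            tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), tokens)
        return tokens
```

At inference time no masking is applied (the `self.training` branch is skipped), so the depth network sees the full image while still benefiting from the masking-induced regularization learned during training.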