Self-supervised monocular depth estimation (MDE) models universally suffer from the notorious edge-fattening issue. Triplet loss, popular in metric learning, has achieved great success in many computer vision tasks. In this paper, we redesign the patch-based triplet loss in MDE to alleviate this ubiquitous edge-fattening issue. We identify two drawbacks of the raw triplet loss in MDE and present our problem-driven redesigns. First, we propose a min-operator-based strategy applied to all negative samples, preventing well-performing negatives from sheltering the errors of edge-fattening negatives. Second, we split the anchor-positive distance and the anchor-negative distance out of the original triplet, which optimizes the positives directly, without any mutual effect with the negatives. Extensive experiments show that the combination of these two small redesigns achieves unprecedented results: our powerful and versatile triplet loss not only makes our model outperform all previous SoTA by a large margin, but also provides substantial performance boosts to a large number of existing models, while introducing no extra inference computation at all.
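The two redesigns can be sketched as follows. This is a minimal NumPy sketch of one plausible reading of the abstract, not the paper's actual implementation: the function name, the use of absolute depth differences as distances, and the specific margin value are all illustrative assumptions.

```python
import numpy as np

def redesigned_triplet_loss(anchor, positives, negatives, margin=1.0):
    """Illustrative patch-based triplet loss with the two redesigns.

    anchor: scalar depth value of the anchor pixel (assumption).
    positives / negatives: 1-D arrays of depth values sampled from the
    same side / the other side of an edge (assumption).
    """
    d_ap = np.abs(positives - anchor)  # anchor-positive distances
    d_an = np.abs(negatives - anchor)  # anchor-negative distances
    # Redesign 1: take the min over all negatives, so that easy
    # (well-performing) negatives cannot average away ("shelter")
    # the error of a failing edge-fattening negative.
    hardest_an = d_an.min()
    # Redesign 2: decouple the anchor-positive term from the
    # anchor-negative term, so positives are optimized directly
    # with no mutual effect from the negatives.
    loss_pos = d_ap.mean()
    loss_neg = max(margin - hardest_an, 0.0)
    return loss_pos + loss_neg
```

Note how the min operator makes the loss sensitive to the single worst negative, while the decoupled hinge on negatives stops pushing once every negative clears the margin.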