Monocular depth estimation is an essential task in the computer vision community. While numerous successful methods have obtained excellent results, most of them are computationally expensive and not applicable for real-time on-device inference. In this paper, we aim at more practical applications of monocular depth estimation, where the solution should consider not only the precision but also the inference time on mobile devices. To this end, we first develop an end-to-end learning-based model with a tiny weight size (1.4MB) and a short inference time (27FPS on Raspberry Pi 4). Then, we propose a simple yet effective data augmentation strategy, called R2 crop, to boost the model performance. Moreover, we observe that a simple lightweight model trained with only a single loss term suffers from a performance bottleneck. To alleviate this issue, we adopt multiple loss terms to provide sufficient constraints during the training stage. Furthermore, with a simple dynamic re-weighting strategy, we avoid the time-consuming hyper-parameter tuning of the loss weights. Finally, we adopt structure-aware distillation to further improve the model performance. Notably, our solution, named LiteDepth, ranks 2nd in the MAI&AIM2022 Monocular Depth Estimation Challenge, with a si-RMSE of 0.311, an RMSE of 3.79, and an inference time of 37 ms on the Raspberry Pi 4, making it the fastest solution in the challenge. Codes and models will be released at \url{https://github.com/zhyever/LiteDepth}.
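The abstract mentions training with multiple loss terms combined through a dynamic re-weighting strategy that removes manual loss-weight tuning. The sketch below illustrates one plausible form of that idea; the specific terms (SILog, L1, gradient) and the running-magnitude re-weighting are assumptions for illustration, not the exact LiteDepth formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTermDepthLoss(nn.Module):
    """Combine several depth-supervision terms and re-weight them on the fly.

    Hypothetical sketch: the term set and the magnitude-based re-weighting
    are illustrative assumptions, not the paper's exact formulation.
    """

    def __init__(self, momentum: float = 0.9, eps: float = 1e-6):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        # Running estimate of each term's magnitude, used to balance scales.
        self.register_buffer("running_mag", torch.ones(3))

    def forward(self, pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
        # pred, gt: (B, 1, H, W) depth maps; invalid ground truth is <= 0.
        valid = gt > 0
        log_diff = torch.log(pred[valid] + self.eps) - torch.log(gt[valid] + self.eps)

        # Term 1: scale-invariant log loss (SILog).
        silog = torch.sqrt((log_diff ** 2).mean() - 0.85 * log_diff.mean() ** 2)
        # Term 2: plain L1 on valid depth values.
        l1 = F.l1_loss(pred[valid], gt[valid])
        # Term 3: gradient-difference loss encouraging sharp depth edges.
        dzdx = (pred[..., 1:] - pred[..., :-1]) - (gt[..., 1:] - gt[..., :-1])
        dzdy = (pred[..., 1:, :] - pred[..., :-1, :]) - (gt[..., 1:, :] - gt[..., :-1, :])
        grad = dzdx.abs().mean() + dzdy.abs().mean()

        terms = torch.stack([silog, l1, grad])
        # Dynamic re-weighting: track each term's magnitude and divide it out,
        # so no per-term weight has to be hand-tuned.
        with torch.no_grad():
            self.running_mag.mul_(self.momentum).add_((1 - self.momentum) * terms)
        weights = 1.0 / (self.running_mag + self.eps)
        return (weights * terms).sum()
```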