Advances in neural networks enable tackling complex computer vision tasks, such as depth estimation of outdoor scenes, with unprecedented accuracy. Promising research has been conducted on depth estimation; however, current approaches are computationally resource-intensive and do not consider the resource constraints of autonomous devices such as robots and drones. In this work, we present a fast and battery-efficient approach to depth estimation. Our approach devises model-agnostic, curriculum-based learning for depth estimation. Our experiments show that our model's accuracy is on par with that of state-of-the-art models, while its response time outperforms theirs by 71%. All code is available online at https://github.com/fatemehkarimii/LightDepth.