Uncertainty quantification is a key aspect of robotic perception, as overconfident or point estimates can lead to collisions and damage to the environment and the robot. In this paper, we evaluate scalable approaches to uncertainty quantification in single-view supervised depth learning, specifically MC dropout and deep ensembles. For MC dropout in particular, we explore the effect of applying dropout at different levels of the architecture. We demonstrate that adding dropout in the encoder leads to better results than adding it in the decoder, the latter being the usual approach in the literature for similar problems. We also propose the use of depth uncertainty in the application of pseudo-RGBD ICP and demonstrate its potential for improving accuracy in such a task.
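The MC dropout idea referenced above can be illustrated with a minimal sketch: keep dropout active at test time, run several stochastic forward passes, and use the mean as the prediction and the standard deviation as the uncertainty. The toy one-layer "network" below is purely hypothetical (random weights standing in for a trained depth model), not the architecture used in the paper.

```python
import random
import statistics

random.seed(0)

# Hypothetical stand-in for a trained depth network: a single linear
# layer with dropout, producing one scalar "depth" per input vector.
WEIGHTS = [random.gauss(0.0, 1.0) for _ in range(8)]

def forward(x, p=0.2):
    """One stochastic forward pass with dropout kept active at test time."""
    total = 0.0
    for w, xi in zip(WEIGHTS, x):
        if random.random() > p:          # keep this unit with prob. 1 - p
            total += w * xi / (1.0 - p)  # inverted-dropout rescaling
    return total

def mc_dropout_predict(x, T=200):
    """MC dropout: mean of T stochastic passes is the prediction,
    their standard deviation is the uncertainty estimate."""
    samples = [forward(x) for _ in range(T)]
    return statistics.mean(samples), statistics.stdev(samples)

x = [random.gauss(0.0, 1.0) for _ in range(8)]
depth, sigma = mc_dropout_predict(x)
```

In a real depth network the same procedure runs per pixel, and the paper's finding is about *where* the dropout layers sit (encoder vs. decoder), which this scalar sketch does not model.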