Learning-based visual odometry (VO) algorithms achieve remarkable performance on common static scenes, benefiting from high-capacity models and massive annotated data, but tend to fail in dynamic, populated environments. Semantic segmentation is commonly used to discard dynamic associations before estimating camera motion, but this comes at the cost of discarding static features and is hard to scale up to unseen categories. In this paper, we leverage the mutual dependence between camera ego-motion and motion segmentation and show that both can be jointly refined in a single learning-based framework. In particular, we present DytanVO, the first supervised learning-based VO method that deals with dynamic environments. It takes two consecutive monocular frames in real time and predicts camera ego-motion in an iterative fashion. Our method achieves an average improvement of 27.7% in ATE over state-of-the-art VO solutions in real-world dynamic environments, and even performs competitively among dynamic visual SLAM systems that optimize the trajectory in the back end. Experiments on a wide range of unseen environments further demonstrate our method's generalizability.
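The iterative refinement described above can be pictured as a small loop that alternates between segmenting dynamic pixels given the current ego-motion hypothesis and re-estimating ego-motion from the remaining (static) correspondences. The sketch below is a minimal, hypothetical illustration of that idea; the module names (`matching_net`, `pose_net`, `seg_net`), interfaces, and stopping criterion are assumptions for clarity, not the authors' implementation.

```python
# Hypothetical sketch of an iterative ego-motion / motion-segmentation loop.
# Module names and the convergence test are illustrative assumptions only.
import torch

class IterativeVO:
    def __init__(self, matching_net, pose_net, seg_net, max_iters=3, tol=1e-3):
        self.matching_net = matching_net   # predicts dense matching / optical flow
        self.pose_net = pose_net           # regresses ego-motion from flow + dynamic mask
        self.seg_net = seg_net             # segments dynamic regions given flow + ego-motion
        self.max_iters = max_iters
        self.tol = tol

    @torch.no_grad()
    def estimate(self, frame_t, frame_t1, intrinsics):
        """Jointly refine ego-motion and the dynamic-object mask for two frames."""
        flow = self.matching_net(frame_t, frame_t1)
        mask = torch.zeros_like(flow[:, :1])          # start by assuming everything is static
        pose = self.pose_net(flow, mask, intrinsics)  # initial ego-motion estimate

        for _ in range(self.max_iters):
            # Re-segment dynamic pixels under the current ego-motion hypothesis,
            # then re-estimate ego-motion using only the (soft-)static regions.
            mask = self.seg_net(flow, pose, intrinsics)
            new_pose = self.pose_net(flow, mask, intrinsics)
            if torch.norm(new_pose - pose) < self.tol:  # stop once the pose update is small
                pose = new_pose
                break
            pose = new_pose
        return pose, mask
```

In this reading, the mutual dependence from the abstract is explicit: a better ego-motion estimate yields a cleaner motion segmentation, which in turn removes dynamic outliers from the next pose estimate.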