We introduce MGNet, a multi-task framework for monocular geometric scene understanding. We define monocular geometric scene understanding as the combination of two well-known tasks: panoptic segmentation and self-supervised monocular depth estimation. Panoptic segmentation captures the full scene not only semantically but also at the instance level. Self-supervised monocular depth estimation uses geometric constraints derived from the camera measurement model to estimate depth from monocular video sequences alone. To the best of our knowledge, we are the first to propose the combination of these two tasks in a single model. Our model is designed with a focus on low latency to provide fast inference in real time on a single consumer-grade GPU. During deployment, our model produces dense 3D point clouds with instance-aware semantic labels from single high-resolution camera images. We evaluate our model on two popular autonomous driving benchmarks, i.e., Cityscapes and KITTI, and show competitive performance among other real-time capable methods. Source code is available at https://github.com/markusschoen/MGNet.
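The deployment step described above, lifting a predicted dense depth map and per-pixel panoptic labels into a labeled 3D point cloud, follows directly from the inverse pinhole camera model. Below is a minimal NumPy sketch of that back-projection; the function name `backproject_to_point_cloud` and the intrinsics handling are illustrative assumptions for this example, not code taken from the MGNet repository.

```python
import numpy as np

def backproject_to_point_cloud(depth, labels, fx, fy, cx, cy):
    """Lift a dense depth map and per-pixel labels into a labeled 3D point cloud.

    Uses the inverse pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth(u, v).
    depth and labels are (H, W) arrays; fx, fy, cx, cy are camera intrinsics.
    Returns an (H*W, 3) array of points and an (H*W,) array of labels.
    """
    h, w = depth.shape
    # Pixel coordinate grid: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points, labels.reshape(-1)

# Toy usage with random inputs standing in for network predictions.
h, w = 4, 6
depth = np.random.uniform(1.0, 80.0, size=(h, w)).astype(np.float32)
labels = np.random.randint(0, 19, size=(h, w))  # e.g. Cityscapes train IDs
points, point_labels = backproject_to_point_cloud(
    depth, labels, fx=500.0, fy=500.0, cx=w / 2, cy=h / 2
)
print(points.shape, point_labels.shape)  # (24, 3) (24,)
```

In practice one would also mask out pixels with invalid depth and carry instance IDs alongside the semantic labels, but the per-pixel unprojection shown here is the core geometric operation.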