Object detection in 3D with stereo cameras is an important problem in computer vision, and is particularly crucial for low-cost autonomous mobile robots without LiDARs. Nowadays, most of the best-performing frameworks for stereo 3D object detection are based on dense depth reconstruction from disparity estimation, making them extremely computationally expensive. To enable real-world deployment of vision-based detection with binocular images, we take a step back to gain insights from 2D image-based detection frameworks and enhance them with stereo features. We incorporate the knowledge and inference structure of a real-time one-stage 2D/3D object detector and introduce a lightweight stereo matching module. Our proposed framework, YOLOStereo3D, is trained on a single GPU and runs at more than ten frames per second. It demonstrates performance comparable to state-of-the-art stereo 3D detection frameworks without using LiDAR data. The code will be published at https://github.com/Owen-Liuyuxuan/visualDet3D.