Determining accurate bird's eye view (BEV) positions of objects and tracks in a scene is vital for various perception tasks, including mapping object interactions and scenario extraction; however, the level of supervision required to accomplish this is extremely challenging to procure. We propose a lightweight, weakly supervised method to estimate the 3D position of objects by jointly learning to regress 2D object detections and the scene's depth prediction in a single feed-forward pass of a network. Our method extends a center-point based single-shot object detector and introduces a novel object representation in which each object is modeled spatio-temporally as a BEV point, without requiring any 3D or BEV annotations for training or LiDAR data at query time. The approach leverages readily available 2D object supervision along with LiDAR point clouds (used only during training) to jointly train a single network that predicts 2D object detections alongside the whole scene's depth, spatio-temporally modeling object tracks as points in BEV. The proposed method is over $\sim$10x more computationally efficient than recent SOTA approaches while achieving comparable accuracy on the KITTI tracking benchmark.
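To make the described pipeline concrete, the sketch below shows one plausible shape of such a network: a shared backbone feeding a center-point detection head and a dense depth head, plus a pinhole-model lifting of a detected 2D center and its predicted depth to a BEV point. All module names, shapes, and the lifting step are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch, assuming a CenterNet-style two-head design; the paper's exact
# heads, losses, and BEV lifting are not specified here.
import torch
import torch.nn as nn

class CenterDepthNet(nn.Module):
    """Shared backbone with a 2D center-point detection head and a dense depth head."""
    def __init__(self, num_classes: int, channels: int = 64):
        super().__init__()
        # Placeholder backbone; any feature extractor producing a coarse feature map works.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
        )
        self.center_head = nn.Conv2d(channels, num_classes, 1)  # 2D center heatmap (2D box supervision)
        self.depth_head = nn.Conv2d(channels, 1, 1)             # dense depth (LiDAR supervision, training only)

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)
        return self.center_head(feats).sigmoid(), self.depth_head(feats)

def lift_to_bev(u: float, v: float, depth: float, fx: float, cx: float):
    """Back-project a detected 2D center (u, v) with its predicted depth to a BEV
    point (x, z) under a standard pinhole camera model (illustrative assumption)."""
    x = (u - cx) * depth / fx
    z = depth
    return x, z
```

At query time, peaks in the center heatmap give the 2D detections; reading the depth map at each peak and applying the lifting step yields the object's BEV point, so no LiDAR is needed at inference.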