While monocular 3D object detection and 2D multi-object tracking can be applied separately to sequential images in a frame-by-frame fashion, a stand-alone tracker cuts off the transmission of uncertainty from the 3D detector to tracking and cannot pass tracking error differentials back to the 3D detector. In this work, we propose jointly training 3D detection and 3D tracking from only monocular videos in an end-to-end manner. The key component is a novel spatial-temporal information flow module that aggregates geometric and appearance features to predict robust similarity scores across all objects in current and past frames. Specifically, we leverage the attention mechanism of the transformer, in which self-attention aggregates the spatial information within a specific frame, and cross-attention exploits the relations and affinities of all objects in the temporal domain across sequential frames. The affinities are then supervised to estimate the trajectory and guide the flow of information between corresponding 3D objects. In addition, we propose a temporal-consistency loss that explicitly incorporates 3D target motion modeling into the learning, making the 3D trajectory smooth in the world coordinate system. Time3D achieves 21.4\% AMOTA and 13.6\% AMOTP on the nuScenes 3D tracking benchmark, surpassing all published competitors while running at 38 FPS, and achieves 31.2\% mAP and 39.4\% NDS on the nuScenes 3D detection benchmark.
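To make the information-flow design concrete, the sketch below illustrates the general pattern the abstract describes: self-attention aggregates object features within the current frame, cross-attention lets current objects attend to past objects, and a small head scores pairwise affinities. This is a minimal PyTorch sketch under our own assumptions, not the authors' released implementation; the class name `SpatialTemporalFlow`, the affinity head, and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of the spatial-temporal information flow idea:
# intra-frame self-attention, cross-frame cross-attention, pairwise affinities.
import torch
import torch.nn as nn


class SpatialTemporalFlow(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Self-attention: spatial aggregation among objects of one frame.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention: temporal relations between current and past objects.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Pairwise affinity head over concatenated object embeddings.
        self.affinity_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, cur: torch.Tensor, past: torch.Tensor) -> torch.Tensor:
        """cur:  (B, N, dim) current-frame object features
                 (geometry + appearance, fused upstream).
           past: (B, M, dim) past-frame object features.
           Returns an affinity matrix of shape (B, N, M)."""
        # Spatial information flow within the current frame.
        cur, _ = self.self_attn(cur, cur, cur)
        # Temporal information flow: current objects attend to past objects.
        cur, _ = self.cross_attn(cur, past, past)
        # Score every (current, past) object pair.
        B, N, _ = cur.shape
        M = past.shape[1]
        pairs = torch.cat(
            [cur.unsqueeze(2).expand(-1, -1, M, -1),
             past.unsqueeze(1).expand(-1, N, -1, -1)], dim=-1)
        return self.affinity_head(pairs).squeeze(-1)  # (B, N, M)
```

In a joint training setup of this kind, the returned affinity matrix would be supervised against ground-truth track correspondences (e.g., with a binary cross-entropy or matching loss), so that gradients flow through the tracker back into the detector's features, which is the end-to-end coupling the abstract motivates.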