This report serves as a supplementary document for TaskPrompter, detailing its implementation on a new joint 2D-3D multi-task learning benchmark based on Cityscapes-3D. TaskPrompter presents an innovative multi-task prompting framework that unifies the learning of (i) task-generic representations, (ii) task-specific representations, and (iii) cross-task interactions, in contrast to previous approaches that separate these learning objectives into different network modules. This unified approach not only reduces the need for meticulous empirical design of network structures but also significantly enhances the multi-task network's representation learning capability, as the entire model capacity is devoted to optimizing all three objectives simultaneously. TaskPrompter introduces a new multi-task benchmark based on the Cityscapes-3D dataset, which requires the multi-task model to concurrently generate predictions for monocular 3D vehicle detection, semantic segmentation, and monocular depth estimation. These tasks are essential for achieving a joint 2D-3D understanding of visual scenes, particularly in the development of autonomous driving systems. On this challenging benchmark, our multi-task model demonstrates strong performance compared to single-task state-of-the-art methods and establishes new state-of-the-art results on the 3D detection and depth estimation tasks.
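To make the unified-prompting idea concrete, the following is a minimal sketch (not the official TaskPrompter implementation) of how learnable task prompts can be concatenated with image patch tokens and processed jointly by shared transformer layers, so that task-generic features (the patch tokens), task-specific features (each task's prompt), and cross-task interaction (attention among prompts and tokens) are all learned within the same blocks. The class name `PromptedTransformer` and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of joint prompt-token learning, assuming a ViT-style backbone.
# This is an illustrative approximation, not the official TaskPrompter code.
import torch
import torch.nn as nn

class PromptedTransformer(nn.Module):
    def __init__(self, num_tasks=3, dim=256, depth=4, heads=8):
        super().__init__()
        # One learnable prompt token per task (task-specific representations).
        self.task_prompts = nn.Parameter(torch.randn(num_tasks, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # Shared layers: all capacity jointly serves the three learning objectives.
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_tasks = num_tasks

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, dim) image tokens from a patch-embedding stem.
        b = patch_tokens.size(0)
        prompts = self.task_prompts.unsqueeze(0).expand(b, -1, -1)
        # Joint sequence: self-attention mixes prompts with patch tokens and
        # prompts with each other, realizing cross-task interaction implicitly.
        x = self.encoder(torch.cat([prompts, patch_tokens], dim=1))
        task_feats = x[:, : self.num_tasks]    # fed to per-task heads
        shared_feats = x[:, self.num_tasks :]  # task-generic representation
        return task_feats, shared_feats

# Usage: three tasks (3D detection, segmentation, depth) on 256 patch tokens.
model = PromptedTransformer(num_tasks=3, dim=256)
tokens = torch.randn(2, 256, 256)              # (batch, num_patches, dim)
task_feats, shared_feats = model(tokens)
print(task_feats.shape, shared_feats.shape)    # (2, 3, 256), (2, 256, 256)
```

In this sketch the per-task decoding heads (e.g., the 3D detection, segmentation, and depth heads) would consume `task_feats` together with `shared_feats`; the key design point is that no separate cross-task fusion module is needed, since interaction happens inside the shared attention layers.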