This report serves as a supplementary document for TaskPrompter, detailing its implementation on a new joint 2D-3D multi-task learning benchmark based on Cityscapes-3D. TaskPrompter presents an innovative multi-task prompting framework that unifies the learning of (i) task-generic representations, (ii) task-specific representations, and (iii) cross-task interactions, as opposed to previous approaches that separate these learning objectives into different network modules. This unified approach not only reduces the need for meticulous empirical structure design but also significantly enhances the multi-task network's representation learning capability, as the entire model capacity is devoted to optimizing the three objectives simultaneously. TaskPrompter introduces a new multi-task benchmark based on the Cityscapes-3D dataset, which requires the multi-task model to concurrently generate predictions for monocular 3D vehicle detection, semantic segmentation, and monocular depth estimation. These tasks are essential for achieving a joint 2D-3D understanding of visual scenes, particularly in the development of autonomous driving systems. On this challenging benchmark, our multi-task model demonstrates strong performance compared to single-task state-of-the-art methods and establishes new state-of-the-art results on the 3D detection and depth estimation tasks.
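To make the prompting idea concrete, below is a minimal, hypothetical sketch of how learnable per-task prompt tokens can be concatenated with image patch tokens and processed jointly by shared transformer blocks, so that task-generic features, task-specific features, and cross-task interactions are all learned within the same layers rather than in separate modules. All module names, the single-prompt-per-task choice, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of joint task prompting (not the official TaskPrompter code).
import torch
import torch.nn as nn


class TaskPromptedEncoder(nn.Module):
    def __init__(self, embed_dim=768, depth=4, num_heads=8,
                 tasks=("det3d", "semseg", "depth")):
        super().__init__()
        self.tasks = tasks
        # One learnable prompt token per task (assumption: the real model
        # may use several prompt tokens and richer prompt designs per task).
        self.task_prompts = nn.Parameter(torch.zeros(len(tasks), embed_dim))
        nn.init.trunc_normal_(self.task_prompts, std=0.02)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, C) tokens from a ViT-style backbone.
        B = patch_tokens.shape[0]
        prompts = self.task_prompts.unsqueeze(0).expand(B, -1, -1)
        # Joint self-attention over [task prompts; patch tokens]: each prompt
        # gathers task-specific evidence from the shared patch tokens, while
        # prompts also attend to one another (cross-task interaction).
        x = self.blocks(torch.cat([prompts, patch_tokens], dim=1))
        num_tasks = len(self.tasks)
        task_feats = {t: x[:, i] for i, t in enumerate(self.tasks)}
        return task_feats, x[:, num_tasks:]  # per-task features, refined patches


# Usage: three task-specific feature vectors plus refined patch tokens are
# produced from one shared forward pass, e.g. for 196 patches of a 224x224 image.
encoder = TaskPromptedEncoder()
task_feats, patches = encoder(torch.randn(2, 196, 768))
```

Because every transformer layer is shared by all prompts and patches, no capacity is reserved for task-specific branches or hand-designed interaction modules, which is the property the unified approach above relies on.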