For the last few decades, several major subfields of artificial intelligence, including computer vision, graphics, and robotics, have progressed largely independently of each other. Recently, however, the community has realized that progress towards robust intelligent systems such as self-driving cars requires a concerted effort across these fields. This motivated us to develop KITTI-360, the successor to the popular KITTI dataset. KITTI-360 is a suburban driving dataset that comprises richer input modalities, comprehensive semantic instance annotations, and accurate localization to facilitate research at the intersection of vision, graphics, and robotics. For efficient annotation, we created a tool to label 3D scenes with bounding primitives and developed a model that transfers this information into the 2D image domain, resulting in over 150k images with semantic and instance annotations and 1B annotated 3D points. Moreover, we established benchmarks and baselines for several tasks relevant to mobile perception, encompassing problems from computer vision, graphics, and robotics on the same dataset. KITTI-360 will enable progress at the intersection of these research areas and thus contribute towards solving one of our grand challenges: the development of fully autonomous self-driving systems.