In this work, we introduce panoramic panoptic segmentation as the most holistic form of scene understanding, both in terms of the field of view and the level of image understanding. A complete understanding of the surroundings provides a maximum of information to the agent, which is essential for any intelligent vehicle making informed decisions in a safety-critical dynamic environment such as real-world traffic. To overcome the lack of annotated panoramic images, we propose a framework that allows model training on standard pinhole images and transfers the learned features to a different domain. Using our proposed method, we achieve significant improvements of over 5\% measured in PQ over non-adapted models on our Wild Panoramic Panoptic Segmentation (WildPPS) dataset. We show that our proposed Panoramic Robust Feature (PRF) framework is not only suitable for improving performance on panoramic images, but can be beneficial whenever model training and deployment are executed on data drawn from different distributions. As an additional contribution, we publish WildPPS, the first panoramic panoptic image dataset, to foster progress in surrounding perception.
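The reported gains are measured in panoptic quality (PQ). For reference, PQ sums the IoUs of matched (IoU > 0.5) segment pairs and divides by TP + ½·FP + ½·FN. A minimal sketch, assuming precomputed per-segment IoUs (the function name and signature are ours, not from the paper):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute PQ from the IoUs of matched segment pairs.

    matched_ious: IoU values of true-positive matches (each > 0.5)
    num_fp: predicted segments with no matching ground-truth segment
    num_fn: ground-truth segments with no matching prediction
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0
    # PQ factorizes into segmentation quality (mean IoU of matches)
    # times recognition quality (an F1-like detection score).
    return sum(matched_ious) / denom
```

For example, two matches with IoUs 0.8 and 0.6, one false positive, and one false negative give PQ = 1.4 / 3.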