Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively under-explored for fisheye cameras. The strong radial distortion of fisheye lenses breaks the translation-invariance inductive bias of Convolutional Neural Networks. Thus, we present the WoodScape fisheye object detection challenge for autonomous driving, which was held as part of the CVPR 2022 Workshop on Omnidirectional Computer Vision (OmniCV). This is one of the first competitions focused on fisheye camera object detection. We encouraged the participants to design models that work natively on fisheye images without rectification. We used CodaLab to host the competition based on the publicly available WoodScape fisheye dataset. In this paper, we provide a detailed analysis of the competition, which attracted 120 global teams and a total of 1492 submissions. We briefly discuss the details of the winning methods and analyze their qualitative and quantitative results.