Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored for fisheye cameras. The standard bounding box representation fails in fisheye cameras due to the strong radial distortion, particularly in the image's periphery. In this work, we explore better representations like oriented bounding box, ellipse, and generic polygon for object detection in fisheye images. We use the IoU metric to compare these representations using accurate instance segmentation ground truth. We design a novel curved bounding box model that has optimal properties for fisheye distortion models. We also design a curvature-adaptive perimeter sampling method for obtaining polygon vertices, improving the relative mAP score by 4.9% compared to uniform sampling. Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%. To the best of our knowledge, this is the first detailed study on object detection for fisheye cameras in autonomous driving scenarios. The dataset comprising 10,000 images along with ground truth for all the object representations will be made public to encourage further research. We summarize our work in a short video with qualitative results at https://youtu.be/iLkOzvJpL-A.
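To illustrate the idea behind curvature-adaptive perimeter sampling, here is a minimal sketch, assuming the object boundary is available as a dense contour extracted from the instance segmentation mask: vertices are placed along the perimeter with density weighted by local curvature, so the strongly distorted, highly curved boundary segments typical of the fisheye periphery receive more vertices than flat segments. The function name, vertex count, and turning-angle curvature estimate below are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def adaptive_polygon_vertices(contour, num_vertices=24, eps=1e-6):
    """Sample polygon vertices from a closed contour (sketch, not the
    paper's exact method), placing more vertices where curvature is high.

    contour: (N, 2) array of (x, y) boundary points, e.g. extracted from
             an instance segmentation mask.
    """
    # Approximate curvature at each point via the turning angle between
    # the incoming and outgoing edge directions.
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v_in = contour - prev_pts
    v_out = next_pts - contour
    ang_in = np.arctan2(v_in[:, 1], v_in[:, 0])
    ang_out = np.arctan2(v_out[:, 1], v_out[:, 0])
    turning = np.abs(np.angle(np.exp(1j * (ang_out - ang_in))))

    # Sampling density: arc length weighted by a base term plus a
    # curvature term, so flat segments still receive some vertices.
    seg_len = np.linalg.norm(v_out, axis=1)
    density = seg_len * (1.0 + turning / (turning.mean() + eps))
    cdf = np.cumsum(density)
    cdf /= cdf[-1]

    # Place vertices at equally spaced quantiles of the adaptive CDF.
    targets = np.linspace(0.0, 1.0, num_vertices, endpoint=False)
    idx = np.searchsorted(cdf, targets)
    return contour[np.clip(idx, 0, len(contour) - 1)]
```

In the same spirit, each candidate representation (box, oriented box, ellipse, curved box, polygon) can be rasterized to a binary mask and scored against the instance segmentation ground truth with mask IoU, which is how the representations are compared in this work.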