Wide-angle cameras are uniquely positioned for mobile robots, by virtue of the rich information they provide in a small, light, and cost-effective form factor. An accurate calibration of the intrinsics and extrinsics is a critical prerequisite for using the edge of a wide-angle lens for depth perception and odometry. Calibrating wide-angle lenses with current state-of-the-art techniques yields poor results due to extreme distortion at the edge, as most algorithms assume a lens with low to medium distortion, close to a pinhole projection. In this work we present our methodology for accurate wide-angle calibration. Our pipeline generates an intermediate model and leverages it to iteratively improve feature detection and, ultimately, the camera parameters. We test three key methods to utilize intermediate camera models: (1) undistorting the image into virtual pinhole cameras, (2) reprojecting the target into the image frame, and (3) adaptive subpixel refinement. Combining adaptive subpixel refinement and feature reprojection significantly improves reprojection errors by up to 26.59%, helps us detect up to 42.01% more features, and improves performance in the downstream task of dense depth mapping. Finally, TartanCalib is open-source and implemented in an easy-to-use calibration toolbox. We also provide a translation layer to other state-of-the-art works, which allows for regressing generic models with thousands of parameters or using a more robust solver. As such, TartanCalib is the tool of choice for wide-angle calibration. Project website and code: http://tartancalib.com.
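As a rough illustration of method (2), target reprojection requires a forward projection through a wide-angle lens model. The sketch below is not the TartanCalib implementation; it assumes an equidistant (fisheye) projection model and illustrative intrinsic values `fx`, `fy`, `cx`, `cy`, and simply maps 3D target points in the camera frame into pixel coordinates so that an intermediate model can predict where target features should appear.

```python
import numpy as np

def project_equidistant(points_cam, fx, fy, cx, cy):
    """Project 3D points (N, 3) in the camera frame into pixels (N, 2)
    under an equidistant fisheye model, where the image radius is
    r = f * theta and theta is the angle from the optical axis."""
    x, y, z = points_cam.T
    theta = np.arctan2(np.hypot(x, y), z)   # angle from optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    u = fx * theta * np.cos(phi) + cx
    v = fy * theta * np.sin(phi) + cy
    return np.stack([u, v], axis=1)

# Illustrative intrinsics (assumed, not from the paper).
fx, fy, cx, cy = 300.0, 300.0, 320.0, 240.0
pts = np.array([[0.0, 0.0, 1.0],    # on the optical axis
                [1.0, 0.0, 1.0]])   # 45 degrees off-axis
uv = project_equidistant(pts, fx, fy, cx, cy)
```

A point on the optical axis lands at the principal point `(cx, cy)`; a point 45 degrees off-axis lands at a radius of `fx * pi/4` pixels, which stays finite even near 90 degrees, unlike a pinhole model where the radius diverges.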