Camera calibration is an important prerequisite for solving 3D computer vision problems. Traditional methods rely on static images of a calibration pattern. This poses a practical challenge for event cameras, which require image change to produce sufficient measurements. The current standard for event camera calibration therefore relies on flashing patterns. These have the advantage of simultaneously triggering events at all reprojected pattern feature locations, but such patterns are difficult to construct or use in the field. We present the first dynamic event camera calibration algorithm. It calibrates directly from events captured during relative motion between the camera and a calibration pattern. The method is propelled by a novel feature extraction mechanism for calibration patterns, and leverages existing calibration tools before optimizing all parameters through a multi-segment continuous-time formulation. As demonstrated by our results on real data, the obtained calibration method is highly convenient and reliably calibrates from data sequences spanning less than 10 seconds.