Calibration of multi-camera systems is a key task for accurate object tracking. However, it remains a challenging problem in real-world conditions, where traditional methods are inapplicable due to the lack of accurate floor plans, of physical access to place calibration patterns, or of synchronized video streams. This paper presents a novel two-stage calibration method that overcomes these limitations. In the first stage, partial calibration of each individual camera is performed from an operator's annotation of natural geometric primitives (parallel, perpendicular, and vertical lines, or line segments of equal length). This yields estimates of the key parameters (roll, pitch, focal length) and allows the camera's Effective Field of View (EFOV) to be projected onto the horizontal plane of a base 3D coordinate system. In the second stage, precise system calibration is achieved through interactive manipulation of the projected EFOV polygons. The operator adjusts their position, scale, and rotation to align them with the floor plan or, when no plan is available, with virtual calibration elements projected onto all cameras in the system. This determines the remaining extrinsic parameters (camera position and yaw). Calibration requires only a static image from each camera, eliminating the need for physical access or synchronized video. The method is implemented as a practical web service. Comparative analysis and demonstration videos confirm the method's applicability, accuracy, and flexibility, enabling the deployment of precise multi-camera tracking systems in scenarios previously considered infeasible.
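The stage-one estimation of roll, pitch, and focal length from annotated line primitives can be realized with standard vanishing-point geometry. The sketch below illustrates one common approach, not necessarily the paper's exact implementation, and all function names are hypothetical: it intersects the annotated vertical and horizontal segment families to obtain two orthogonal vanishing points, recovers the focal length from their orthogonality constraint, and reads pitch and roll off the vertical vanishing point, assuming square pixels and a known principal point.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of a family of image segments
    that are parallel in the scene. segments: [((x1, y1), (x2, y2)), ...]."""
    A, b = [], []
    for p, q in segments:
        # Homogeneous cross product gives line coefficients (a, b, c): ax + by + c = 0.
        l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        A.append(l[:2])
        b.append(-l[2])
    vp, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return vp

def partial_calibration(vertical_segments, horizontal_segments, principal_point):
    """Estimate focal length (pixels), pitch, and roll (radians) from
    annotated vertical plumb lines and one family of horizontal parallel lines.
    Assumes square pixels and a known principal point (hypothetical helper)."""
    c = np.asarray(principal_point, dtype=float)
    vz = vanishing_point(vertical_segments) - c    # vertical VP, centered on c
    vh = vanishing_point(horizontal_segments) - c  # horizontal VP, centered on c
    # Orthogonal scene directions satisfy (vh . vz) + f^2 = 0.
    f = np.sqrt(max(-np.dot(vh, vz), 0.0))
    roll = np.arctan2(vz[0], vz[1])       # rotation of the VP off the image's vertical axis
    pitch = np.arctan2(f, np.hypot(*vz))  # camera tilt below the horizon
    return f, pitch, roll
```

The focal-length step uses the classical constraint that for vanishing points of two orthogonal scene directions, `(vh - c) . (vz - c) = -f**2`; once `f` is known, the distance and direction of the vertical vanishing point from the principal point determine pitch and roll, which is exactly the partial (per-camera) calibration the first stage needs before the EFOV can be projected onto the ground plane.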