In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. Advances in micro lenses and image sensors have enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied to plenoptic cameras. We present a geometrical light field model that allows triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with an optical design software further validate the model's accuracy, with deviations of less than ±0.33 % for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
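As an illustrative sketch only (not the paper's plenoptic light field model), the classic stereo triangulation relation referenced above can be written as Z = f·B/d, where Z is the object distance, f the focal length, B the baseline, and d the disparity. The following minimal Python snippet, with hypothetical example values, shows how a depth estimate follows from a known baseline and measured disparity under these assumptions.

```python
# Sketch of two-view triangulation from disparity (assumed rectified views).
# This is the standard stereo relation, not the paper's plenoptic model.

def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return object distance in metres: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    # Hypothetical values: 1000 px focal length, 1 mm baseline
    # (a plausible order of magnitude for virtual views in a plenoptic camera),
    # 2 px disparity -> estimated object distance of 0.5 m.
    print(depth_from_disparity(focal_length_px=1000.0, baseline_m=0.001, disparity_px=2.0))
```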