High-accuracy per-pixel depth is vital for computational photography, so smartphones now have multimodal camera systems with time-of-flight (ToF) depth sensors and multiple color cameras. However, producing accurate high-resolution depth is still challenging: ToF sensors have low resolution and limited active illumination power. Fusing RGB stereo and ToF information is a promising direction to overcome these issues, but a key problem remains: to provide high-quality 2D RGB images, the main color sensor's lens is optically stabilized, so the floating lens has an unknown pose that breaks the geometric relationships between the multimodal image sensors. Leveraging ToF depth estimates and a wide-angle RGB camera, we design an automatic calibration technique based on dense 2D/3D matching that can estimate the extrinsic, intrinsic, and distortion parameters of the stabilized main RGB sensor from a single snapshot. This lets us fuse stereo and ToF cues via a correlation volume. For fusion, we train a deep network on a real-world dataset whose depth supervision is estimated by a neural reconstruction method. For evaluation, we acquire a test dataset using a commercial high-power depth camera and show that our approach achieves higher accuracy than existing baselines.
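The calibration step can be pictured as single-view resectioning: ToF depth back-projected through the (fixed) wide-angle camera yields metric 3D points, and dense matching gives their 2D locations in the stabilized main sensor, from which one snapshot suffices to recover pose, intrinsics, and distortion. A minimal sketch using OpenCV's `calibrateCamera` as a stand-in optimizer (the paper's actual matcher and solver are not specified here; all names and the `K_init` seeding are assumptions):

```python
import numpy as np
import cv2

def calibrate_main_camera(pts3d, pts2d, image_size, K_init):
    """Hypothetical single-snapshot calibration of the stabilized main RGB
    sensor from dense 2D/3D correspondences.

    pts3d: (N, 3) float32 world points, back-projected from ToF depth via
           the wide-angle camera (assumed already metric).
    pts2d: (N, 2) float32 matched pixel locations in the main RGB image.
    K_init: (3, 3) nominal intrinsics, e.g. from the lens spec; OpenCV
            requires an initial guess for non-planar single-view input.
    """
    flags = (cv2.CALIB_USE_INTRINSIC_GUESS   # refine K_init rather than solve from scratch
             | cv2.CALIB_FIX_ASPECT_RATIO)   # keep fx/fy stable with only one view
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [pts3d.astype(np.float32)], [pts2d.astype(np.float32)],
        image_size, K_init.copy(), None, flags=flags)
    R, _ = cv2.Rodrigues(rvecs[0])  # extrinsic rotation of the floating lens
    return K, dist, R, tvecs[0], rms  # rms reprojection error as a sanity check
```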
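The correlation volume used for stereo/ToF fusion can be sketched as a standard disparity-sweep correlation between feature maps of the two rectified RGB views; reprojected ToF depth can then seed or re-weight the disparity hypotheses. A hedged PyTorch sketch, not the paper's actual network:

```python
import torch

def correlation_volume(feat_ref, feat_src, max_disp):
    """Build a correlation volume over disparity hypotheses.

    feat_ref, feat_src: (B, C, H, W) feature maps from rectified views.
    Returns (B, max_disp, H, W): per-pixel feature correlation at each
    candidate disparity. ToF depth converted to disparity can narrow
    this hypothesis range or bias the subsequent soft-argmin.
    """
    B, C, H, W = feat_ref.shape
    volume = feat_ref.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            volume[:, 0] = (feat_ref * feat_src).mean(dim=1)
        else:
            # shift the source view by d pixels; only columns >= d are valid
            volume[:, d, :, d:] = (feat_ref[..., d:] * feat_src[..., :-d]).mean(dim=1)
    return volume
```

In this sketch the volume is dense over all disparities; one plausible way to inject the ToF cue is to mask or gate hypotheses far from the reprojected ToF disparity before regressing depth.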