A thermal camera can robustly capture thermal radiation images under harsh lighting conditions such as night scenes, tunnels, and disaster scenarios. However, despite this advantage, research on depth and ego-motion estimation for thermal cameras has not been actively explored so far. In this paper, we propose a self-supervised learning method for depth and ego-motion estimation from thermal images. The proposed method exploits multi-spectral consistency, which consists of temperature and photometric consistency losses. The temperature consistency loss provides a fundamental self-supervisory signal by reconstructing clipped and colorized thermal images. Additionally, we design a differentiable forward warping module that can transform the estimated depth map and relative pose from the thermal camera's coordinate system to the visible camera's. Based on the proposed module, the photometric consistency loss provides complementary self-supervision to the networks. Networks trained with the proposed method robustly estimate depth and pose from monocular thermal video under low-light and even zero-light conditions. To the best of our knowledge, this is the first work to simultaneously estimate both depth and ego-motion from monocular thermal video in a self-supervised manner.
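The geometry behind such a forward warping module can be illustrated with a minimal NumPy sketch: each thermal pixel is back-projected with its estimated depth, rigidly transformed into the visible camera frame, and re-projected. The names `K_thermal`, `K_visible`, and `T_thermal_to_visible` are hypothetical placeholders for the two cameras' intrinsics and the thermal-to-visible extrinsic transform; the paper's actual module additionally implements this differentiably so gradients flow to the depth and pose networks.

```python
import numpy as np

def forward_warp_points(depth, K_thermal, K_visible, T_thermal_to_visible):
    """Project each thermal pixel, given its estimated depth, into the
    visible camera's image plane. A non-differentiable sketch of the
    forward-warping geometry; matrix names are illustrative assumptions."""
    h, w = depth.shape
    # Pixel grid of the thermal image in homogeneous coordinates (3, h*w)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    # Back-project to 3-D points in the thermal camera frame
    pts_thermal = np.linalg.inv(K_thermal) @ pix * depth.reshape(1, -1)
    # Apply the rigid thermal-to-visible transform (4x4 homogeneous matrix)
    pts_h = np.vstack([pts_thermal, np.ones((1, h * w))])
    pts_visible = (T_thermal_to_visible @ pts_h)[:3]
    # Project with the visible camera intrinsics and dehomogenize
    proj = K_visible @ pts_visible
    uv = proj[:2] / proj[2:3]
    return uv.reshape(2, h, w)  # (u, v) coordinates in the visible image
```

With identical intrinsics and an identity extrinsic transform, each pixel maps back onto itself, which is a quick sanity check of the projection chain.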