Recently, self-supervised learning has been applied to estimate depth and ego-motion from monocular videos, achieving remarkable performance in autonomous driving scenarios. A widely adopted assumption in self-supervised depth and ego-motion learning is that image brightness remains constant across nearby frames. Unfortunately, endoscopic scenes do not satisfy this assumption, because severe brightness fluctuations arise during data collection from illumination variations, non-Lambertian reflections and interreflections, and these fluctuations inevitably degrade the accuracy of depth and ego-motion estimation. In this work, we introduce a novel concept referred to as appearance flow to address the brightness inconsistency problem. The appearance flow accounts for any variation in the brightness pattern and enables us to develop a generalized dynamic image constraint. Furthermore, we build a unified self-supervised framework to estimate monocular depth and ego-motion simultaneously in endoscopic scenes, comprising a structure module, a motion module, an appearance module and a correspondence module, to accurately reconstruct the appearance and calibrate the image brightness. Extensive experiments are conducted on the SCARED and EndoSLAM datasets, and the proposed unified framework exceeds other self-supervised approaches by a large margin. To validate the framework's generalization ability across different patients and cameras, we train our model on SCARED and test it on the SERV-CT and Hamlyn datasets without any fine-tuning; the superior results reveal its strong generalization ability. Code will be available at: \url{https://github.com/ShuweiShao/AF-SfMLearner}.
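To make the idea of a generalized dynamic image constraint concrete, the sketch below shows one way a brightness-calibrated photometric loss could look, assuming the appearance flow is predicted as a dense per-pixel brightness offset that is applied to the warped source frame before comparison with the target frame. The function name, tensor shapes, and the plain L1 residual are illustrative assumptions, not the authors' exact loss formulation.

```python
import torch

def brightness_calibrated_photometric_loss(target, source_warped, appearance_flow):
    """Photometric loss under a generalized (brightness-aware) image constraint.

    Instead of assuming I_t(x) == I_{t+1}(x + motion), the warped source frame
    is first corrected by a dense per-pixel brightness offset (the "appearance
    flow") so that residual errors reflect geometry rather than illumination
    changes. Illustrative sketch only; the full framework also uses SSIM and
    regularization terms.
    """
    # Calibrate the brightness of the warped source frame.
    source_calibrated = source_warped + appearance_flow
    # Simple L1 photometric residual between target and calibrated source.
    return (target - source_calibrated).abs().mean()

# Toy usage with random tensors standing in for network outputs.
B, C, H, W = 2, 3, 128, 160
target = torch.rand(B, C, H, W)                   # frame I_t
source_warped = torch.rand(B, C, H, W)            # frame I_{t+1} warped into view t via depth + pose
appearance_flow = 0.1 * torch.randn(B, C, H, W)   # predicted brightness change field
loss = brightness_calibrated_photometric_loss(target, source_warped, appearance_flow)
print(loss.item())
```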