Relying on deep supervised or self-supervised learning, previous methods for depth completion from paired single images and sparse depth data have achieved impressive performance in recent years. However, when facing a new environment where the test data arrives online and differs from the training data in both RGB image content and depth sparsity, the trained model may suffer a severe performance drop. To make the trained model work well under such conditions, we expect it to adapt to the new environment continuously and effectively. To achieve this, we propose MetaComp. It utilizes meta-learning to simulate adaptation policies during the training phase, and then adapts the model to new environments in a self-supervised manner during testing. Since the input is multi-modal, adapting a model to variations in both modalities simultaneously is challenging due to the significant differences in structure and form between the two modalities. Therefore, we further propose to disentangle the adaptation procedure in the basic meta-learning training into two steps, the first focusing on depth sparsity and the second on image content. During testing, we adopt the same strategy to adapt the model online to new multi-modal data. Experimental results and comprehensive ablations show that MetaComp adapts effectively to depth completion in new environments and is robust to changes in different modalities.
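The sketch below illustrates, under stated assumptions, how the two-step disentangled adaptation described above could be organized as a first-order meta-learning loop: an inner step adapting to a change in depth sparsity, a second inner step adapting to a change in image content, and an outer meta-update. All names (ToyCompletionNet, self_supervised_loss, make_batch) and the first-order (Reptile/FOMAML-style) update are hypothetical placeholders, not the authors' implementation, which uses its own network and self-supervised objectives.

```python
# Minimal first-order meta-learning sketch of two-step disentangled adaptation.
# Hypothetical placeholders throughout; not the MetaComp implementation.
import copy
import torch
import torch.nn as nn

class ToyCompletionNet(nn.Module):
    """Hypothetical tiny network: RGB (3 ch) + sparse depth (1 ch) -> dense depth."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        return self.body(torch.cat([rgb, sparse_depth], dim=1))

def self_supervised_loss(pred, sparse_depth):
    # Placeholder objective: consistency with the valid sparse measurements only.
    mask = (sparse_depth > 0).float()
    return ((pred - sparse_depth) * mask).abs().sum() / mask.sum().clamp(min=1.0)

def make_batch(sparsity):
    # Synthetic stand-in for an (RGB, sparse depth) batch at a given sparsity level.
    rgb = torch.rand(2, 3, 64, 64)
    depth = torch.rand(2, 1, 64, 64) * (torch.rand(2, 1, 64, 64) < sparsity).float()
    return rgb, depth

model = ToyCompletionNet()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
inner_lr = 1e-3

for meta_iter in range(10):
    # Clone the model so inner-loop adaptation does not overwrite meta-parameters.
    fast = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)

    # Step 1: adapt to a change in depth sparsity.
    rgb_a, sd_a = make_batch(sparsity=0.01)
    loss1 = self_supervised_loss(fast(rgb_a, sd_a), sd_a)
    inner_opt.zero_grad(); loss1.backward(); inner_opt.step()

    # Step 2: adapt to a change in image content (a different batch/scene).
    rgb_b, sd_b = make_batch(sparsity=0.05)
    loss2 = self_supervised_loss(fast(rgb_b, sd_b), sd_b)
    inner_opt.zero_grad(); loss2.backward(); inner_opt.step()

    # First-order meta-update: evaluate the adapted weights on held-out data and
    # copy their gradients back onto the meta-parameters.
    rgb_q, sd_q = make_batch(sparsity=0.05)
    meta_loss = self_supervised_loss(fast(rgb_q, sd_q), sd_q)
    fast.zero_grad(); meta_loss.backward()
    meta_opt.zero_grad()
    for p, fp in zip(model.parameters(), fast.parameters()):
        p.grad = fp.grad.clone()
    meta_opt.step()
```

At test time the same two-step inner loop could be run online on incoming batches, updating the deployed weights directly instead of a cloned copy.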