Measuring and alleviating the discrepancies between synthetic (source) and real-scene (target) data is the core issue in domain adaptive semantic segmentation. Though recent works have introduced depth information in the source domain to reinforce geometric and semantic knowledge transfer, they cannot extract the intrinsic 3D information of objects, including positions and shapes, from 2D estimated depth alone. In this work, we propose a novel Geometry-Aware Network for Domain Adaptation (GANDA), leveraging more compact 3D geometric point cloud representations to shrink the domain gap. In particular, we first utilize auxiliary depth supervision from the source domain to obtain depth predictions in the target domain and accomplish structure-texture disentanglement. Beyond depth estimation, we explicitly exploit the 3D topology of point clouds generated from RGB-D images for further coordinate-color disentanglement and pseudo-label refinement in the target domain. Moreover, to improve the 2D classifier in the target domain, we perform domain-invariant geometric adaptation from source to target and unify the 2D semantic and 3D geometric segmentation results across the two domains. Note that GANDA is plug-and-play in any existing UDA framework. Qualitative and quantitative results demonstrate that our model outperforms state-of-the-art methods on GTA5->Cityscapes and SYNTHIA->Cityscapes.
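To make the point-cloud step concrete: the abstract describes lifting RGB-D images to point clouds for coordinate-color disentanglement. Below is a minimal sketch of the standard pinhole back-projection that such a step relies on; the function name, the assumption of known camera intrinsics K, and the valid-depth masking are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def backproject_to_point_cloud(depth, rgb, K):
    """Lift an aligned depth map and RGB image to a colored 3D point cloud.

    depth : (H, W) array of metric depth values (e.g. predicted by a depth
            head supervised on the source domain).
    rgb   : (H, W, 3) array of colors aligned with the depth map.
    K     : (3, 3) camera intrinsic matrix (assumed known for the dataset).

    Returns (N, 3) coordinates and (N, 3) colors, i.e. the coordinate/color
    split that geometric processing can operate on.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel grid (u, v) covering the image plane.
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

    # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    valid = z > 0  # drop pixels without a usable depth estimate
    coords = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = rgb[valid]
    return coords, colors
```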