3D shape completion from point clouds is a challenging task, especially for scans of real-world objects. Considering the paucity of 3D shape ground truths for real scans, existing works mainly focus on benchmarking this task on synthetic data, e.g., 3D computer-aided design (CAD) models. However, the domain gap between synthetic and real data limits the generalizability of these methods. Thus, we propose a new task, SCoDA, for the domain adaptation of real scan shape completion from synthetic data. A new dataset, ScanSalon, is contributed, containing elaborate 3D models created by skillful artists according to real scans. To address this new task, we propose a novel cross-domain feature fusion method for knowledge transfer and a novel volume-consistent self-training framework for robust learning from real data. Extensive experiments demonstrate that our method is effective, bringing an improvement of 6%~7% mIoU.
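The reported gain is measured in mean Intersection-over-Union (mIoU). As an illustration only, and not the paper's evaluation code, the sketch below computes volumetric mIoU between predicted and ground-truth occupancy grids; the grid resolution and the 0.5 occupancy threshold are assumptions for the example.

```python
# Minimal sketch of volumetric mIoU for shape completion.
# Assumptions (not from the paper): occupancy grids, threshold 0.5, 64^3 resolution.
import numpy as np

def volume_iou(pred_occ: np.ndarray, gt_occ: np.ndarray, thresh: float = 0.5) -> float:
    """IoU between a predicted and a ground-truth occupancy volume."""
    pred = pred_occ >= thresh
    gt = gt_occ >= thresh
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def mean_iou(preds: list, gts: list) -> float:
    """mIoU averaged over a set of completed shapes."""
    return float(np.mean([volume_iou(p, g) for p, g in zip(preds, gts)]))

if __name__ == "__main__":
    # Two random 64^3 grids stand in for predicted and ground-truth occupancies.
    rng = np.random.default_rng(0)
    preds = [rng.random((64, 64, 64)) for _ in range(2)]
    gts = [rng.random((64, 64, 64)) for _ in range(2)]
    print(f"mIoU: {mean_iou(preds, gts):.3f}")
```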