Semantic Scene Completion (SSC) transforms a single-view depth and/or RGB image into a grid of 3D voxels and predicts a semantic label for each voxel. SSC is a well-known ill-posed problem, as the prediction model must "imagine" what lies behind the visible surface, which is usually represented by a Truncated Signed Distance Function (TSDF). Due to the sensory imperfections of depth cameras, most existing methods, which rely on a noisy TSDF estimated from depth values, suffer from 1) incomplete volumetric predictions and 2) confused semantic labels. To address this, we use the ground-truth 3D voxels to generate a perfect visible surface, called TSDF-CAD, and train a "cleaner" SSC model on it. Since its input is noise-free, this model is expected to focus more on "imagining" the unseen voxels. We then propose to distill the intermediate knowledge of this "cleaner self" into another model that takes the noisy TSDF as input. In particular, we use the 3D occupancy features and the semantic relations of the "cleaner self" to supervise the corresponding components of the "noisy self", addressing the two kinds of incorrect predictions above, respectively. Experimental results validate that our method improves the noisy counterparts by 3.1% IoU and 2.2% mIoU on the scene completion and SSC metrics, respectively, and achieves new state-of-the-art accuracy on the popular NYU dataset.
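The two distillation signals described above, supervising the "noisy self" with the occupancy features and semantic relations of the "cleaner self", can be sketched as simple losses. The sketch below is a minimal illustration under stated assumptions, not the paper's actual formulation: the function names, the use of MSE for feature matching, and cosine similarity for the pairwise "relation" matrix are all hypothetical choices for clarity.

```python
import numpy as np

def occupancy_feature_loss(f_clean: np.ndarray, f_noisy: np.ndarray) -> float:
    """MSE between intermediate 3D occupancy features of teacher and student.

    f_clean / f_noisy: feature tensors of identical shape, e.g. (C, D, H, W).
    Hypothetical formulation; the paper's exact loss may differ.
    """
    return float(np.mean((f_clean - f_noisy) ** 2))

def semantic_relation_loss(z_clean: np.ndarray, z_noisy: np.ndarray) -> float:
    """Match pairwise similarity ("relation") matrices over voxel embeddings.

    z_clean / z_noisy: (N, C) arrays of N voxel embeddings.
    The relation matrix here is cosine similarity, a common but assumed choice.
    """
    def relation(z: np.ndarray) -> np.ndarray:
        z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
        return z @ z.T  # (N, N) voxel-to-voxel similarity
    return float(np.mean((relation(z_clean) - relation(z_noisy)) ** 2))
```

Both losses vanish when the noisy model's intermediate representations match the clean model's, which is the intended effect of the distillation: the occupancy-feature term targets the incomplete-volume errors, while the relation term targets the confused-label errors.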