One of the main obstacles to 3D semantic segmentation is the considerable effort required to produce expensive point-wise annotations for fully supervised training. To reduce this manual labor, we propose GIDSeg, a novel approach that learns segmentation from sparse annotations by jointly reasoning over global-regional structures and individual-vicinal properties. GIDSeg models global and individual relations via a dynamic edge convolution network coupled with a kernelized identity descriptor, and obtains their ensemble effect by endowing a low-resolution voxelized map with a fine-grained receptive field. An adversarial learning module is further designed to strengthen the conditional constraint of the identity descriptors within the joint feature distribution. Despite its apparent simplicity, the proposed approach outperforms state-of-the-art methods in inferring dense 3D segmentation from only sparse annotations; in particular, with annotations on only $5\%$ of the raw data, GIDSeg surpasses other 3D segmentation methods.
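Since the abstract names a dynamic edge convolution network as the relation-reasoning backbone, the following is a minimal PyTorch sketch of a generic DGCNN-style EdgeConv layer for intuition only; the neighborhood size `k`, layer widths, and all identifiers are illustrative assumptions, not the authors' GIDSeg configuration.

```python
# Sketch of a dynamic edge convolution (EdgeConv) layer: the neighbor graph is
# rebuilt from feature-space distances at every forward pass, so relations are
# "dynamic" rather than fixed by the input geometry. Hyperparameters are
# assumptions for illustration, not GIDSeg's actual settings.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_dim, out_dim, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_dim),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim, num_points)
        b, d, n = x.shape
        feats = x.transpose(1, 2)                                  # (B, N, d)
        # Dynamic graph: k nearest neighbors in feature space.
        dists = torch.cdist(feats, feats)                          # (B, N, N)
        idx = dists.topk(self.k, dim=-1, largest=False).indices    # (B, N, k)
        neighbors = torch.gather(
            feats.unsqueeze(1).expand(b, n, n, d), 2,
            idx.unsqueeze(-1).expand(b, n, self.k, d),
        )                                                          # (B, N, k, d)
        center = feats.unsqueeze(2).expand_as(neighbors)
        # Edge feature [x_i, x_j - x_i], shared MLP, max over neighbors.
        edge = torch.cat([center, neighbors - center], dim=-1)     # (B, N, k, 2d)
        edge = edge.permute(0, 3, 1, 2)                            # (B, 2d, N, k)
        return self.mlp(edge).max(dim=-1).values                   # (B, out_dim, N)
```

Max-aggregation over the rebuilt neighborhood is what lets such a layer capture both local (vicinal) structure and, as layers stack and the feature-space graph rewires, longer-range (global) relations.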