Precise segmentation of teeth from intra-oral scanner images is an essential task in computer-aided orthodontic surgical planning. State-of-the-art deep-learning-based methods often simply concatenate the raw geometric attributes (i.e., coordinates and normal vectors) of mesh cells to train a single-stream network for automatic intra-oral scanner image segmentation. However, since different raw attributes reveal completely different geometric information, naively concatenating them at the (low-level) input stage may cause unnecessary confusion in describing and differentiating between mesh cells, thus hampering the learning of high-level geometric representations for the segmentation task. To address this issue, we design a two-stream graph convolutional network (i.e., TSGCN), which effectively handles inter-view confusion between different raw attributes, fuses their complementary information, and learns discriminative multi-view geometric representations. Specifically, our TSGCN adopts two input-specific graph-learning streams to extract complementary high-level geometric representations from coordinates and normal vectors, respectively. These single-view representations are then fused by a self-attention module that adaptively balances the contributions of different views, yielding more discriminative multi-view representations for accurate and fully automatic tooth segmentation. We have evaluated our TSGCN on a real-patient dataset of dental (mesh) models acquired by 3D intra-oral scanners. Experimental results show that our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation. GitHub: https://github.com/ZhangLingMing1/TSGCNet.
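To make the two-stream design concrete, below is a minimal PyTorch sketch of the idea described above: one stream learns from cell coordinates, the other from cell normals, and a self-attention module weights the two views per cell before a segmentation head. All class names, layer widths, the on-the-fly kNN aggregation, and the input/label dimensions are illustrative assumptions, not the authors' exact implementation (see the GitHub repository for that).

```python
# Hedged sketch of a two-stream, attention-fused segmentation network.
# The kNN max-pooling stands in for the paper's graph-learning streams;
# all sizes and names below are assumptions for illustration only.
import torch
import torch.nn as nn


def knn_graph_features(x, k=16):
    """Max-pool features over a kNN graph built in feature space (assumed aggregation).

    x: (B, C, N) per-cell features; returns (B, C, N).
    """
    dists = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))      # (B, N, N)
    idx = dists.topk(k, dim=-1, largest=False).indices             # (B, N, k)
    B, C, N = x.shape
    batch = torch.arange(B, device=x.device).view(B, 1, 1)
    neighbors = x.transpose(1, 2)[batch, idx]                      # (B, N, k, C)
    return neighbors.max(dim=2).values.transpose(1, 2)             # (B, C, N)


class Stream(nn.Module):
    """One input-specific stream: shared per-cell MLPs plus graph aggregation."""
    def __init__(self, in_dim, dims=(64, 128, 256)):
        super().__init__()
        layers, prev = [], in_dim
        for d in dims:
            layers += [nn.Conv1d(prev, d, 1), nn.BatchNorm1d(d), nn.ReLU(inplace=True)]
            prev = d
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        return knn_graph_features(self.mlp(x))


class AttentionFusion(nn.Module):
    """Per-cell self-attention over the two views to balance their contributions."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Conv1d(dim, 1, 1)

    def forward(self, f_coord, f_normal):
        s = torch.stack([self.score(f_coord), self.score(f_normal)], dim=0)  # (2, B, 1, N)
        w = torch.softmax(s, dim=0)                                          # weights over views
        return w[0] * f_coord + w[1] * f_normal


class TwoStreamSegNet(nn.Module):
    def __init__(self, num_classes=15, coord_dim=12, normal_dim=12, feat_dim=256):
        super().__init__()
        self.coord_stream = Stream(coord_dim, dims=(64, 128, feat_dim))
        self.normal_stream = Stream(normal_dim, dims=(64, 128, feat_dim))
        self.fusion = AttentionFusion(feat_dim)
        self.head = nn.Sequential(
            nn.Conv1d(feat_dim, 128, 1), nn.ReLU(inplace=True),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, coords, normals):
        # coords / normals: (B, C, N) raw attributes of N mesh cells.
        fused = self.fusion(self.coord_stream(coords), self.normal_stream(normals))
        return self.head(fused)                                    # (B, num_classes, N) logits


if __name__ == "__main__":
    net = TwoStreamSegNet()
    coords = torch.randn(2, 12, 1024)    # e.g., 3 vertex coordinates + centroid per cell (assumed)
    normals = torch.randn(2, 12, 1024)   # e.g., 3 vertex normals + cell normal (assumed)
    print(net(coords, normals).shape)    # torch.Size([2, 15, 1024])
```

The key design point illustrated here is that the coordinate and normal attributes never see each other until after each stream has produced its own high-level features, so the attention module fuses views rather than raw inputs.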