The automatic localization and standardization of anatomical planes in 3D medical imaging remains a challenging problem due to variability in object pose, appearance, and image quality. In 3D ultrasound, these challenges are exacerbated by speckle noise and limited contrast, particularly in fetal imaging. To address these challenges in the context of facial assessment, we present: 1) GT++, a robust algorithm that estimates standard facial planes from 3D US volumes using annotated anatomical landmarks; and 2) 3DFETUS, a deep learning model that automates and standardizes their localization in 3D fetal US volumes. We evaluated our methods both qualitatively, through expert clinical review, and quantitatively. The proposed approach achieved a mean translation error of 3.21 $\pm$ 1.98 mm and a mean rotation error of 5.31 $\pm$ 3.945$^\circ$ per plane, outperforming other state-of-the-art methods on 3D US volumes. Clinical assessments further confirmed the effectiveness of both GT++ and 3DFETUS, demonstrating statistically significant improvements in plane estimation accuracy.
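The translation and rotation errors reported above compare an estimated plane pose against a ground-truth pose. As a minimal sketch of how such metrics are commonly computed (the paper may define them differently; the function name and pose parameterization here are assumptions), one can measure the Euclidean distance between plane origins and the geodesic angle between plane orientations on SO(3):

```python
import numpy as np

def plane_pose_errors(R_est, t_est, R_gt, t_gt):
    """Hypothetical error metrics between an estimated and a ground-truth plane pose.

    R_est, R_gt: 3x3 rotation matrices giving each plane's orientation.
    t_est, t_gt: 3-vectors giving each plane's origin (in mm).
    Returns (translation error in mm, rotation error in degrees).
    """
    # Translation error: Euclidean distance between the two plane origins.
    t_err = float(np.linalg.norm(np.asarray(t_est, float) - np.asarray(t_gt, float)))

    # Rotation error: angle of the relative rotation R_est^T @ R_gt,
    # i.e. the geodesic distance on SO(3). Clip guards against
    # floating-point values slightly outside [-1, 1].
    R_rel = np.asarray(R_est, float).T @ np.asarray(R_gt, float)
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = float(np.degrees(np.arccos(cos_theta)))
    return t_err, r_err
```

Averaging these two quantities over all evaluated planes yields per-plane mean errors of the kind quoted in the results.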