AI-assisted surgery has drawn the attention of the medical image analysis community because of its real-world impact on surgical success rates. In image-guided procedures such as cochlear implant (CI) surgery, accurate segmentation of anatomical structures provides surgeons with useful information before an operation. Recently published segmentation methods that leverage machine learning usually rely on a large number of manually annotated ground-truth labels, but preparing such a dataset is a laborious and time-consuming task. This paper presents a novel technique that uses a self-supervised 3D-UNet to produce a dense deformation field between an atlas and a target image, which is then used for atlas-based segmentation of the ossicles. Our results show that the proposed method outperforms traditional image segmentation methods and generates a more accurate boundary around the ossicles, as measured by the Dice similarity coefficient and point-to-point error; the mean Dice coefficient is improved by 8.51% with our proposed method.
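The atlas-based segmentation step described above, warping atlas labels through a dense deformation field and scoring the result with the Dice coefficient, can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (voxel-unit displacements, nearest-neighbor sampling for integer labels), not the paper's implementation:

```python
import numpy as np

def warp_labels(atlas_labels, disp):
    """Warp an atlas label volume with a dense displacement field.

    atlas_labels: (D, H, W) integer label array.
    disp: (3, D, H, W) displacement field in voxel units (assumed convention).
    Uses nearest-neighbor sampling so label values stay integral.
    """
    D, H, W = atlas_labels.shape
    grid = np.indices((D, H, W)).astype(float)   # identity sampling grid
    coords = np.rint(grid + disp).astype(int)    # displaced sample locations
    for axis, size in enumerate((D, H, W)):      # clamp to volume bounds
        np.clip(coords[axis], 0, size - 1, out=coords[axis])
    return atlas_labels[coords[0], coords[1], coords[2]]

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

In the actual method, `disp` would be the deformation field predicted by the self-supervised 3D-UNet; here a uniform shift serves as a stand-in to show how warped atlas labels are compared against a target segmentation.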