Automatic medical image segmentation via convolutional neural networks (CNNs) has shown promising results. However, CNNs may not always be robust enough for clinical use. Sub-optimal segmentation would require clinicians to manually delineate the target object, causing frustration. To address this problem, a novel interactive CNN-based segmentation framework is proposed in this work. The aim is to represent the CNN segmentation contour with B-splines by utilising B-spline explicit active surfaces (BEAS). The interactive element of the framework allows the user to precisely edit the contour in real time, and the use of BEAS ensures that the final contour is smooth and anatomically plausible. The framework was applied to the task of segmenting the levator hiatus in 2D ultrasound (US) images, and compared to the current clinical tools used in pelvic floor disorder clinics (4DView, GE Healthcare; Zipf, Austria). Experimental results show that: 1) the proposed framework is more robust than current state-of-the-art CNNs; 2) the perceived workload, measured via the NASA-TLX index, was reduced by more than half for the proposed approach in comparison to current clinical tools; and 3) the proposed tool requires at least 13 seconds less user time than the clinical tools, a significant difference (p=0.001).
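The core idea of representing a segmentation contour compactly as a smooth closed B-spline can be illustrated with a minimal sketch. This is not the authors' BEAS implementation; it simply fits a periodic cubic B-spline to a hypothetical noisy boundary (standing in for a CNN mask contour) using SciPy's `splprep`, where the smoothing factor `s` trades fidelity for smoothness:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical closed contour: a circle with small perturbations,
# standing in for the boundary extracted from a CNN segmentation mask.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = np.cos(theta) + 0.02 * np.sin(7 * theta)
y = np.sin(theta) + 0.02 * np.cos(5 * theta)

# Fit a periodic (closed) cubic B-spline to the boundary points.
# per=True enforces closure; s > 0 smooths out jagged mask edges.
tck, _ = splprep([x, y], s=0.01, per=True)

# Evaluate the smooth contour at 200 evenly spaced parameter values.
u = np.linspace(0, 1, 200)
sx, sy = splev(u, tck)
```

Editing the spline's control points (stored in `tck`) moves the contour locally while the basis-function support keeps the curve smooth, which is the property an interactive editing tool exploits.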