Accurate segmentation of the prostate and surrounding organs at risk is important for prostate cancer radiotherapy treatment planning. We present a fully automated deep learning workflow for male pelvic CT image segmentation. The architecture consists of a 2D localization network followed by a 3D segmentation network for volumetric segmentation of the prostate, bladder, rectum, and femoral heads. We used a multi-channel 2D U-Net followed by a 3D U-Net whose encoding arm is modified with aggregated residual networks, known as ResNeXt. The models were trained and tested on a pelvic CT image dataset comprising 136 patients. Test results show that the proposed fully automated 3D U-Net based segmentation achieves mean (SD) Dice coefficients of 90 (2.0)%, 96 (3.0)%, 95 (1.3)%, 95 (1.5)%, and 84 (3.7)% for the prostate, left femoral head, right femoral head, bladder, and rectum, respectively.
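To make the "encoding arm modified with aggregated residual networks (ResNeXt)" concrete, below is a minimal sketch of a 3D aggregated-residual block of the kind that could replace plain convolutions in a 3D U-Net encoder. This is not the authors' implementation; the framework (PyTorch), channel counts, cardinality, and bottleneck width are illustrative assumptions.

```python
# Sketch only: a 3D ResNeXt-style bottleneck block. All hyperparameters
# (cardinality, bottleneck width, channel sizes) are assumed, not taken
# from the paper.
import torch
import torch.nn as nn


class ResNeXtBlock3D(nn.Module):
    """Bottleneck block whose 3x3x3 convolution is grouped (cardinality C),
    implementing the aggregated parallel transformation paths of ResNeXt."""

    def __init__(self, in_channels, out_channels, cardinality=8, bottleneck_width=4):
        super().__init__()
        mid = cardinality * bottleneck_width
        self.conv1 = nn.Conv3d(in_channels, mid, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm3d(mid)
        # Grouped convolution realizes the aggregated residual transform.
        self.conv2 = nn.Conv3d(mid, mid, kernel_size=3, padding=1,
                               groups=cardinality, bias=False)
        self.bn2 = nn.BatchNorm3d(mid)
        self.conv3 = nn.Conv3d(mid, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # 1x1x1 projection on the skip path when channel counts differ.
        self.proj = (nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)
                     if in_channels != out_channels else nn.Identity())

    def forward(self, x):
        identity = self.proj(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)


if __name__ == "__main__":
    # Toy forward pass on a cropped CT sub-volume (batch, channels, D, H, W).
    block = ResNeXtBlock3D(in_channels=32, out_channels=64)
    x = torch.randn(1, 32, 32, 64, 64)
    print(block(x).shape)  # torch.Size([1, 64, 32, 64, 64])
```

In a two-stage pipeline of this kind, the 2D localization U-Net would typically crop a region of interest around the pelvis, and blocks like the one above would form the encoder stages of the 3D segmentation U-Net operating on that cropped volume.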