Accurate retinal vessel segmentation is a challenging problem in color fundus image analysis. An automatic retinal vessel segmentation system can effectively facilitate clinical diagnosis and ophthalmological research. Technically, the problem is complicated by varying vessel thickness, the perception of fine details, and contextual feature fusion. To address these challenges, a deep-learning-based method is proposed that integrates several customized modules into U-Net, the well-known encoder-decoder architecture widely employed in medical image segmentation. Structurally, cascaded dilated convolutional modules are integrated into the intermediate layers to obtain a larger receptive field and generate denser encoded feature maps. In addition, a pyramid module with spatial continuity is exploited for multi-thickness perception, detail refinement, and contextual feature fusion. The effectiveness of different normalization approaches during network training is also discussed for datasets with specific properties. Experimentally, extensive comparative experiments were conducted on three retinal vessel segmentation datasets: DRIVE, CHASEDB1, and the pathology-containing dataset STARE. As a result, the proposed method outperforms previous work and achieves state-of-the-art performance in Sensitivity/Recall, F1-score, and MCC.
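To illustrate why cascaded dilated convolutions yield a larger receptive field than plain convolutions of the same depth, the following is a minimal sketch of the standard receptive-field arithmetic for stride-1 layers. The specific kernel sizes and dilation rates (3×3 kernels with dilations 1, 2, 4) are illustrative assumptions, not the paper's exact configuration.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d enlarges the
    receptive field by (k - 1) * d, starting from a single pixel.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Hypothetical cascade of 3x3 convolutions with dilation rates 1, 2, 4:
print(receptive_field([3, 3, 3], [1, 2, 4]))  # -> 15
# The same depth without dilation covers far less context:
print(receptive_field([3, 3, 3], [1, 1, 1]))  # -> 7
```

At equal depth and parameter count, the dilated cascade more than doubles the spatial context each encoded feature sees, which is what allows the intermediate layers to produce denser encoded feature maps without additional downsampling.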