Automated segmentation of pathological regions of interest has been shown to aid prognosis and follow-up treatment. However, accurate pathological segmentation requires high-quality annotated data, which can be both cost- and time-intensive to generate. In this work, we propose an automated two-step method that evaluates the quality of medical images from 3D image stacks using a U-net++ model, so that images that can aid further training of the U-net++ model are detected based on the disagreement between the segmentations produced by its final two layers. The images thus detected can then be used to further fine-tune the U-net++ model for semantic segmentation. The proposed QU-net++ model isolates around 10\% of images per 3D stack and scales across imaging modalities, segmenting cysts in OCT images and ground glass opacity in lung CT images with Dice scores in the range 0.56-0.72. Thus, the proposed method can be applied for multi-modal binary segmentation of pathology.
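As a minimal sketch of the selection step described above, the snippet below ranks slices of a 3D stack by the disagreement (1 minus Dice overlap) between the binarized masks produced by the two deepest supervision outputs of a U-net++ and keeps the most-disagreeing ~10\%. The function names, the `fraction` parameter, and the assumption that the two output mask lists are already computed are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks (illustrative helper)."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def select_for_finetuning(masks_depth3, masks_depth4, fraction=0.10):
    """Hypothetical selection step: rank slices by disagreement between the
    two deepest U-net++ supervision outputs and keep the top `fraction`.

    masks_depth3, masks_depth4: lists of binary masks, one pair per slice,
    assumed to come from the final two decoder layers of a trained U-net++.
    """
    scores = [dice(m3, m4) for m3, m4 in zip(masks_depth3, masks_depth4)]
    order = np.argsort(scores)               # lowest Dice = highest disagreement
    k = max(1, int(fraction * len(scores)))  # ~10% of images per 3D stack
    return order[:k]                         # slice indices to use for fine-tuning
```

The selected indices would then point to the slices worth annotating and feeding back into fine-tuning; the exact thresholding and disagreement measure used by QU-net++ may differ.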