Image segmentation is a fundamental problem in medical image analysis. In recent years, deep neural networks have achieved impressive performance on many medical image segmentation tasks through supervised learning on large manually annotated datasets. However, expert annotation of large medical datasets is tedious, expensive, or sometimes unavailable. Weakly supervised learning can reduce the annotation effort but still requires a certain amount of expertise. Recently, deep learning has shown the potential to produce predictions that are more accurate than the original, erroneous labels. Inspired by this, we introduce a very weakly supervised learning method for cystic lesion detection and segmentation in lung CT images that requires no manual annotation. Our method works in a self-learning manner: the segmentation generated in previous steps (first by unsupervised segmentation, then by neural networks) is used as the ground truth for the next round of network training. Experiments on a cystic lung lesion dataset show that deep learning can outperform the initial unsupervised annotation and progressively improves itself through self-learning.
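To make the self-learning scheme concrete, the sketch below illustrates the loop described above: unsupervised segmentation provides the initial pseudo ground truth, a model is trained on those labels, and its predictions replace the labels for the next round. This is only a minimal illustration under assumed details; the function names (unsupervised_segmentation, train_segmentation_network, predict), the thresholding stand-ins, and the synthetic data are hypothetical placeholders, not the authors' implementation, which would use a segmentation CNN and real lung CT volumes.

```python
# Minimal sketch of the self-learning loop (hypothetical; toy stand-ins
# replace the unsupervised segmenter and the segmentation network).
import numpy as np

def unsupervised_segmentation(volume, threshold=-850.0):
    """Initial pseudo-labels, e.g. simple intensity thresholding on CT (HU)."""
    return (volume < threshold).astype(np.uint8)

def train_segmentation_network(volumes, labels):
    """Stand-in for training a segmentation CNN on the current pseudo-labels.
    Here it merely learns a global threshold separating labeled voxel classes."""
    fg_vals = [v[l == 1] for v, l in zip(volumes, labels) if (l == 1).any()]
    bg_vals = [v[l == 0] for v, l in zip(volumes, labels) if (l == 0).any()]
    if not fg_vals or not bg_vals:
        return {"threshold": -850.0}
    cut = (np.concatenate(fg_vals).mean() + np.concatenate(bg_vals).mean()) / 2.0
    return {"threshold": cut}

def predict(model, volume):
    """Stand-in for network inference producing a refined segmentation."""
    return (volume < model["threshold"]).astype(np.uint8)

def self_learning(volumes, rounds=3):
    # Step 0: pseudo ground truth from unsupervised segmentation (no manual labels).
    labels = [unsupervised_segmentation(v) for v in volumes]
    model = None
    for _ in range(rounds):
        # Train on the current pseudo-labels, then replace them with the
        # model's own predictions, which become ground truth for the next round.
        model = train_segmentation_network(volumes, labels)
        labels = [predict(model, v) for v in volumes]
    return model, labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "CT" volumes: background near -500 HU, a lesion block near -900 HU.
    vols = []
    for _ in range(4):
        v = rng.normal(-500.0, 50.0, size=(32, 32, 32))
        v[8:16, 8:16, 8:16] = rng.normal(-900.0, 30.0, size=(8, 8, 8))
        vols.append(v)
    model, labels = self_learning(vols)
    print("learned threshold:", model["threshold"], "lesion voxels:", int(labels[0].sum()))
```

In a real pipeline, each round would retrain the segmentation network from the refined labels, so the quality of the pseudo ground truth and of the resulting segmentations can improve together over successive iterations.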