Deep neural networks have achieved remarkable success in a wide variety of natural and medical image computing tasks. However, these achievements rely indispensably on accurately annotated training data; when some training images carry noisy labels, network training suffers and yields a sub-optimal classifier. This problem is even more severe in medical image analysis, where annotation quality depends heavily on the expertise and experience of annotators. In this paper, we propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification from noisy-labeled data, to combat the lack of high-quality annotated medical data. Specifically, we employ a self-ensemble model with a noisy-label filter to efficiently separate clean and noisy samples. The clean samples are then trained with a collaborative training strategy to eliminate the disturbance from imperfectly labeled samples. Notably, we further design a novel global and local representation learning scheme that implicitly regularizes the networks to exploit noisy samples in a self-supervised manner. We evaluated the proposed robust learning strategy on four public medical image classification datasets with three types of label noise, i.e., random noise, computer-generated label noise, and inter-observer variability noise. Our method outperforms other learning-from-noisy-labels methods, and we also conducted extensive experiments to analyze each component of our method.
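The abstract mentions a noisy-label filter for separating clean from noisy samples but does not specify its form. One widely used instance of such a filter is the small-loss criterion (samples that the network fits with low loss are presumed clean); the sketch below illustrates that idea only, under the assumption of a known noise rate, and the function name and parameters are illustrative, not the paper's actual implementation.

```python
import numpy as np

def small_loss_select(losses, noise_rate):
    """Split samples into presumed-clean and presumed-noisy sets using
    the small-loss criterion: keep the (1 - noise_rate) fraction of
    samples with the lowest per-sample loss as clean.

    losses     -- 1-D array of per-sample training losses
    noise_rate -- assumed fraction of noisy labels in the dataset
    """
    n_keep = int(len(losses) * (1.0 - noise_rate))
    order = np.argsort(losses)          # indices sorted by ascending loss
    clean_idx = order[:n_keep]          # low-loss samples -> presumed clean
    noisy_idx = order[n_keep:]          # high-loss samples -> presumed noisy
    return clean_idx, noisy_idx

# Toy usage: six samples, a third of which have conspicuously high loss.
losses = np.array([0.10, 2.30, 0.20, 1.90, 0.15, 0.30])
clean, noisy = small_loss_select(losses, noise_rate=1 / 3)
```

In practice such filters are often combined with an ensemble or a second network (as in the collaborative training described above) so that the two models cross-check each other's selections rather than trusting a single loss ranking.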