Supervised deep learning performance is heavily tied to the availability of high-quality labels for training. Neural networks can gradually overfit to corrupted labels when trained directly on noisy datasets, leading to severe performance degradation at test time. In this paper, we propose a novel deep learning framework, namely Co-Seg, to collaboratively train segmentation networks on datasets that include low-quality noisy labels. Our approach first trains two networks simultaneously to sift through all samples and obtain a subset with reliable labels. Then, an efficient yet easily implemented label correction strategy is applied to enrich the reliable subset. Finally, using the updated dataset, we retrain the segmentation network to finalize its parameters. Experiments in two noisy-label scenarios demonstrate that our proposed model can achieve results comparable to those of supervised learning on noise-free labels. In addition, our framework can be readily integrated into any segmentation algorithm to increase its robustness to noisy labels.
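To make the pipeline concrete, the following is a minimal PyTorch sketch of the first two stages described above: co-training two networks to select reliable samples, and correcting the remaining labels. The abstract does not specify the selection or correction criteria, so this sketch assumes small-loss selection (in the spirit of co-teaching) and agreement-based pseudo-label correction; the function and parameter names (`coseg_epoch`, `correct_labels`, `keep_ratio`, `conf_thresh`) are illustrative, not the paper's API.

```python
import torch
import torch.nn as nn

def per_sample_loss(logits, labels):
    # Per-sample segmentation loss: pixel-wise cross-entropy averaged over
    # the spatial dimensions. logits: (N, C, H, W), labels: (N, H, W).
    ce = nn.functional.cross_entropy(logits, labels, reduction="none")
    return ce.flatten(1).mean(dim=1)  # shape (N,)

def coseg_epoch(net_a, net_b, opt_a, opt_b, loader, keep_ratio=0.8):
    # One co-training epoch. Assumption: each network keeps the samples on
    # which its peer incurs the smallest loss (small-loss selection), since
    # samples with noisy labels tend to incur larger losses early in training.
    for images, labels in loader:
        loss_a = per_sample_loss(net_a(images), labels)
        loss_b = per_sample_loss(net_b(images), labels)
        k = max(1, int(keep_ratio * images.size(0)))
        idx_a = torch.topk(loss_a.detach(), k, largest=False).indices
        idx_b = torch.topk(loss_b.detach(), k, largest=False).indices
        # Cross-update: each network learns from the subset its peer deems reliable.
        opt_a.zero_grad(); loss_a[idx_b].mean().backward(); opt_a.step()
        opt_b.zero_grad(); loss_b[idx_a].mean().backward(); opt_b.step()

@torch.no_grad()
def correct_labels(net_a, net_b, images, labels, conf_thresh=0.9):
    # Illustrative label correction: where both networks agree on a pixel's
    # class with high confidence, replace the (possibly noisy) label with
    # the agreed prediction; elsewhere keep the original label.
    conf_a, pred_a = net_a(images).softmax(dim=1).max(dim=1)
    conf_b, pred_b = net_b(images).softmax(dim=1).max(dim=1)
    agree = (pred_a == pred_b) & (conf_a > conf_thresh) & (conf_b > conf_thresh)
    return torch.where(agree, pred_a, labels)
```

In the final stage, the corrected dataset would be used to retrain a single segmentation network from scratch with a standard supervised loss.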