Supervised training of deep neural networks (DNNs) with noisy labels has been studied extensively in image classification but much less in image segmentation. Our understanding of the learning behavior of DNNs trained with noisy segmentation labels remains limited. We address this deficiency in both binary segmentation of biological microscopy images and multi-class segmentation of natural images. We classify segmentation labels according to their noise transition matrices (NTMs) and compare the performance of DNNs trained with different types of labels. When we randomly sample a small fraction (e.g., 10%) or flip a large fraction (e.g., 90%) of the ground-truth labels to train DNNs, their segmentation performance remains largely unchanged. This indicates that, in supervised training for semantic segmentation, DNNs learn structures hidden in the labels rather than the pixel-level labels per se. We call these hidden structures meta-structures. When labels with different perturbations of the meta-structures are used to train DNNs, their performance in feature extraction and segmentation degrades consistently. In contrast, adding meta-structure information substantially improves the performance of an unsupervised model in binary semantic segmentation. We formulate meta-structures mathematically as spatial density distributions and show, both theoretically and experimentally, how this formulation explains key observed learning behaviors of DNNs.
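As a concrete illustration of the two perturbations described above, the sketch below generates noisy binary masks by randomly sampling a small fraction of ground-truth foreground pixels or randomly flipping a large fraction of all pixel labels, and estimates the empirical noise transition matrix of the result. This is a minimal reading of the experimental setup, assuming binary masks stored as NumPy arrays; the function names and the NTM estimator are our assumptions, not the paper's code.

```python
import numpy as np

def sample_labels(mask, keep_frac=0.1, rng=None):
    """Keep a random fraction of ground-truth foreground pixels; set the rest to background."""
    rng = rng or np.random.default_rng()
    noisy = np.zeros_like(mask)
    fg = np.flatnonzero(mask)  # flat indices of foreground pixels
    keep = rng.choice(fg, size=int(keep_frac * fg.size), replace=False)
    noisy.flat[keep] = 1
    return noisy

def flip_labels(mask, flip_frac=0.9, rng=None):
    """Flip the labels (0 <-> 1) of a random fraction of all pixels."""
    rng = rng or np.random.default_rng()
    noisy = mask.copy()
    idx = rng.choice(mask.size, size=int(flip_frac * mask.size), replace=False)
    noisy.flat[idx] = 1 - noisy.flat[idx]
    return noisy

def noise_transition_matrix(clean, noisy, n_classes=2):
    """Empirical NTM: T[i, j] = P(noisy label = j | clean label = i)."""
    T = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        sel = clean == i
        if sel.any():
            for j in range(n_classes):
                T[i, j] = np.mean(noisy[sel] == j)
    return T

# Example: a random binary mask, its sampled and flipped variants, and their NTMs.
mask = (np.random.default_rng(0).random((256, 256)) < 0.2).astype(np.int64)
print(noise_transition_matrix(mask, sample_labels(mask)))
print(noise_transition_matrix(mask, flip_labels(mask)))
```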
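The abstract formulates meta-structures as spatial density distributions but does not spell out an estimator. One possible reading, assuming the density of a label map can be approximated by Gaussian kernel smoothing (the bandwidth `sigma` and the L1 comparison below are our illustrative choices, not the paper's definition):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_density(mask, sigma=8.0):
    """Approximate the spatial density distribution of a label map:
    Gaussian kernel density estimate over labeled pixels, normalized to sum to 1."""
    density = gaussian_filter(mask.astype(float), sigma=sigma)
    total = density.sum()
    return density / total if total > 0 else density

def density_distance(mask_a, mask_b, sigma=8.0):
    """L1 distance between the spatial densities of two label maps."""
    return np.abs(spatial_density(mask_a, sigma) - spatial_density(mask_b, sigma)).sum()
```

Under this reading, a sparse random sample of the foreground yields roughly the same normalized density as the full ground truth, which would be consistent with the unchanged segmentation performance reported above, whereas perturbations that alter the meta-structure move the density away from it.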