The scarcity of labeled data often impedes the application of deep learning to the segmentation of medical images. Semi-supervised learning seeks to overcome this limitation by leveraging unlabeled examples in the learning process. In this paper, we present a novel semi-supervised segmentation method that leverages mutual information (MI) on categorical distributions to achieve both global representation invariance and local smoothness. In this method, we maximize the MI for intermediate feature embeddings that are taken from both the encoder and decoder of a segmentation network. We first propose a global MI loss constraining the encoder to learn an image representation that is invariant to geometric transformations. Instead of resorting to computationally-expensive techniques for estimating the MI on continuous feature embeddings, we use projection heads to map them to a discrete cluster assignment where MI can be computed efficiently. Our method also includes a local MI loss to promote spatial consistency in the feature maps of the decoder and provide a smoother segmentation. Since mutual information does not require a strict ordering of clusters in two different assignments, we incorporate a final consistency regularization loss on the output which helps align the cluster labels throughout the network. We evaluate the method on three challenging publicly-available datasets for medical image segmentation. Experimental results show that our method outperforms recently-proposed approaches for semi-supervised segmentation and achieves accuracy close to full supervision while training with very few annotated images.
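To make the core quantity concrete, the following is a minimal sketch of how MI can be computed efficiently on discrete cluster assignments, as the abstract describes: two projection-head outputs (logits over K clusters, one per augmented view) are turned into soft assignments, their joint distribution is estimated by averaging over the batch, and the MI of that joint is evaluated in closed form. The function names and the specific estimator (a batch-averaged joint, as popularized by IIC-style clustering objectives) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the cluster dimension.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def categorical_mutual_information(z1, z2):
    """Estimate MI between two soft cluster assignments.

    z1, z2: (N, K) projection-head logits for two geometric
    transformations of the same N inputs. Hypothetical helper,
    sketching the discrete-MI idea from the abstract.
    """
    p1, p2 = softmax(z1), softmax(z2)
    joint = p1.T @ p2 / p1.shape[0]        # (K, K) empirical joint distribution
    joint = (joint + joint.T) / 2.0        # symmetrize: MI is order-agnostic
    pi = joint.sum(axis=1, keepdims=True)  # marginal over view 1
    pj = joint.sum(axis=0, keepdims=True)  # marginal over view 2
    eps = 1e-12                            # avoid log(0)
    return float(np.sum(joint * (np.log(joint + eps)
                                 - np.log(pi + eps)
                                 - np.log(pj + eps))))
```

Maximizing this quantity (i.e., minimizing its negative as a loss) encourages assignments that are both confident and invariant across transformations; because MI is invariant to a permutation of the cluster labels, an additional output-level consistency term is needed to align labels across heads, as the abstract notes.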