Semi-supervised learning has made significant strides in the medical domain, since it alleviates the heavy burden of collecting abundant pixel-wise annotations for semantic segmentation tasks. Existing semi-supervised approaches enhance the ability to extract features from unlabeled data using prior knowledge obtained from limited labeled data. However, because labeled data are scarce, the features learned under supervision are limited, and the quality of predictions on unlabeled data cannot be guaranteed; both issues impede consistency training. To this end, we propose a novel uncertainty-aware scheme that makes models learn from regions purposefully. Specifically, we employ Monte Carlo sampling as an estimation method to obtain an uncertainty map, which serves as a weight on the losses to force the models to focus on valuable regions according to the respective characteristics of supervised and unsupervised learning. Simultaneously, during backpropagation, we combine the unsupervised and supervised losses to accelerate the convergence of the network by enhancing the gradient flow between the two tasks. We conduct extensive experiments on three challenging medical datasets, and the results show desirable improvements over state-of-the-art counterparts.
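To make the described scheme concrete, below is a minimal sketch (not the authors' released code) of Monte Carlo sampling for an uncertainty map and its use as a per-pixel loss weight. It assumes a PyTorch segmentation model containing dropout layers; the function names, the number of stochastic passes, and the specific weighting of each loss term (emphasizing uncertain pixels for the supervised term, down-weighting them for the unsupervised term) are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def mc_uncertainty(model, x, num_mc_samples=8):
    """Estimate a per-pixel uncertainty map via Monte Carlo sampling.

    Dropout is kept active so repeated forward passes produce different
    softmax outputs; the entropy of their mean serves as the uncertainty map.
    """
    model.train()  # keep dropout active for stochastic forward passes
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=1) for _ in range(num_mc_samples)]
        ).mean(dim=0)                                          # (B, C, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)    # (B, H, W)
    return entropy


def joint_loss(model, x_l, y_l, x_u, pseudo_u, lambda_u=0.1):
    """Weight the supervised and unsupervised losses with the uncertainty
    map and sum them, so a single backward pass carries gradients from
    both tasks (hypothetical weighting, for illustration only)."""
    u_map_l = mc_uncertainty(model, x_l)
    u_map_u = mc_uncertainty(model, x_u)

    # Supervised term: emphasize uncertain (hard) pixels of labeled data.
    ce_l = F.cross_entropy(model(x_l), y_l, reduction="none")   # (B, H, W)
    loss_sup = (u_map_l * ce_l).mean()

    # Unsupervised term: down-weight uncertain pixels of pseudo labels.
    ce_u = F.cross_entropy(model(x_u), pseudo_u, reduction="none")
    loss_unsup = (torch.exp(-u_map_u) * ce_u).mean()

    return loss_sup + lambda_u * loss_unsup
```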