Learning discriminative representations of unlabelled data is a challenging task. Contrastive self-supervised learning provides a framework to learn meaningful representations using learned notions of similarity from simple pretext tasks. In this work, we propose a simple and efficient framework for self-supervised image segmentation using contrastive learning on image patches, without explicit pretext tasks or any further labeled fine-tuning. A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images and to produce confidence maps that capture the network's belief about objects belonging to the same class. Positive and negative patches are sampled for contrastive learning based on the average entropy in the confidence maps. Convergence is assumed when the information separation between positive patches is small and that between positive-negative pairs is large. The proposed model consists only of a simple FCNN with 10.8k parameters and requires about five minutes to converge on the high-resolution microscopy datasets, which is orders of magnitude smaller and faster than relevant self-supervised methods attaining similar performance. We evaluate the proposed method on the task of segmenting nuclei from two histopathology datasets, and show performance comparable to relevant self-supervised and supervised methods.
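The entropy-based patch sampling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the random sampling scheme, and the entropy threshold separating positive (confident, low-entropy) from negative (uncertain, high-entropy) patches are all assumptions.

```python
import numpy as np

def patch_entropy(conf_map, top_left, size):
    """Average per-pixel entropy of a square patch of a confidence map.

    conf_map: (H, W, C) array of per-pixel class probabilities.
    """
    r, c = top_left
    patch = conf_map[r:r + size, c:c + size]
    eps = 1e-12  # avoid log(0)
    ent = -np.sum(patch * np.log(patch + eps), axis=-1)  # (size, size) entropies
    return float(ent.mean())

def sample_patches(conf_map, size, n, threshold, rng=None):
    """Draw n random patches and label each positive (average entropy below
    the threshold, i.e. confident) or negative (above, i.e. uncertain).
    The threshold is a hypothetical hyperparameter for illustration."""
    rng = np.random.default_rng(rng)
    H, W, _ = conf_map.shape
    positives, negatives = [], []
    for _ in range(n):
        r = int(rng.integers(0, H - size + 1))
        c = int(rng.integers(0, W - size + 1))
        e = patch_entropy(conf_map, (r, c), size)
        (positives if e < threshold else negatives).append((r, c, e))
    return positives, negatives
```

Under this reading, low-entropy patches act as confident same-class anchors while high-entropy patches supply the contrasting negatives for the contrastive objective.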