Disease diagnosis from medical images via supervised learning typically depends on tedious, error-prone, and costly image labeling by medical experts. Semi-supervised and self-supervised learning offer an alternative by extracting useful information from readily available unlabeled images. We present Semi-Supervised Relational Contrastive Learning (SRCL), a novel semi-supervised learning model that combines a self-supervised contrastive loss with sample relation consistency to exploit unlabeled data more meaningfully and effectively. Our experiments with SRCL explore both pre-train/fine-tune and joint learning of the pretext (contrastive learning) and downstream (diagnostic classification) tasks. We validate on the ISIC 2018 Challenge skin lesion classification benchmark and demonstrate the effectiveness of our semi-supervised method across varying amounts of labeled data.
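To make the two loss components named above concrete, here is a minimal NumPy sketch, assuming a standard NT-Xent-style contrastive loss over two augmented views and a Gram-matrix formulation of sample relation consistency; the exact formulations, hyperparameters, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss (assumed formulation).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Positive pairs are (z1[i], z2[i]); all other batch samples act as negatives.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # each sample's positive partner sits N rows away: i <-> i + N
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()

def relation_consistency_loss(f_a, f_b):
    """Sample relation consistency (assumed formulation): penalize differences
    in the pairwise-similarity structure of two embeddings of the same batch."""
    def gram(f):
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        return f @ f.T                                # (N, N) relation matrix
    return np.mean((gram(f_a) - gram(f_b)) ** 2)
```

Closely matched views should incur a lower contrastive loss than unrelated ones, since their positive-pair similarities dominate the softmax; the relation term is zero when two embeddings induce identical pairwise-similarity structure.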