This work presents a novel self-supervised pre-training method to learn efficient representations without labels on histopathology medical images, utilizing magnification factors. Other state-of-the-art works mainly focus on fully supervised learning approaches that rely heavily on human annotations. However, the scarcity of labeled and unlabeled data is a long-standing challenge in histopathology. Currently, representation learning without labels remains unexplored in the histopathology domain. The proposed method, Magnification Prior Contrastive Similarity (MPCS), enables self-supervised learning of representations without labels on the small-scale breast cancer dataset BreakHis by exploiting the magnification factor, inductive transfer, and reduced human prior. The proposed method matches state-of-the-art fully supervised performance in malignancy classification when only 20% of labels are used for fine-tuning, and outperforms previous works in the fully supervised setting. The work further formulates a hypothesis and provides empirical evidence to support it: reducing human prior leads to efficient representation learning in self-supervision. The implementation of this work is available online on GitHub: https://github.com/prakashchhipa/Magnification-Prior-Self-Supervised-Method
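To make the pairing idea concrete, the sketch below shows one plausible reading of magnification-prior contrastive pre-training: the two "views" of a sample are images of the same specimen at two magnification factors, rather than two synthetic augmentations of one image, trained with a SimCLR-style NT-Xent loss. This is a minimal illustration, not the authors' implementation; the encoder choice, projection head, loss, and all hyperparameters here are assumptions, and the input tensors are placeholders for BreakHis magnification pairs.

```python
# Hypothetical sketch of magnification-pair contrastive pre-training.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style normalized temperature-scaled cross-entropy loss."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # exclude self-similarity
    # Positive for the i-th low-magnification view is the i-th high-magnification
    # view, and vice versa; every other sample in the batch acts as a negative.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = resnet50(weights=None)
encoder.fc = torch.nn.Identity()                         # keep 2048-d features
projector = torch.nn.Sequential(                         # small projection head
    torch.nn.Linear(2048, 512), torch.nn.ReLU(), torch.nn.Linear(512, 128))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-4)

# One training step on a batch of magnification pairs (random placeholder tensors;
# in practice, patches of the same specimen at e.g. 100x and 200x magnification).
x_low, x_high = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent_loss(projector(encoder(x_low)), projector(encoder(x_high)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Under this reading, the magnification factor itself supplies the view pairing, which is what allows the human prior encoded in hand-designed augmentation pipelines to be reduced.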