Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter- and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data. We carry out extensive validation experiments on three histopathology benchmark datasets across two classification tasks and one regression task, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited labeled data, the proposed method yields tangible improvements, coming close to or even outperforming other state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that bootstrapping the self-supervised pretrained features is an effective way to improve task-specific semi-supervised learning on standard benchmarks. Code and pretrained models will be made available at: https://github.com/srinidhiPY/SSL_CR_Histo
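To make the teacher-student consistency idea concrete, below is a minimal PyTorch-style sketch of a single fine-tuning step: a supervised loss on the small labeled set plus a consistency loss between the student's and an exponential-moving-average (EMA) teacher's predictions on task-specific unlabeled images. The function names, loss weighting, EMA momentum, and use of MSE between softmax outputs are illustrative assumptions, not the authors' exact configuration; see the linked repository for the reference implementation.

```python
# Hypothetical sketch of teacher-student consistency fine-tuning; models, data
# loaders, and hyperparameters are assumed for illustration only.
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def train_step(student, teacher, optimizer, labeled_batch, unlabeled_batch,
               consistency_weight=1.0):
    x_l, y_l = labeled_batch                     # small labeled set
    x_u_student, x_u_teacher = unlabeled_batch   # two augmented views of unlabeled data

    # Supervised loss on the few labeled examples.
    sup_loss = F.cross_entropy(student(x_l), y_l)

    # Consistency loss: the student's predictions should agree with the
    # EMA teacher's (soft) predictions on task-specific unlabeled data.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_u_teacher), dim=1)
    student_probs = F.softmax(student(x_u_student), dim=1)
    cons_loss = F.mse_loss(student_probs, teacher_probs)

    loss = sup_loss + consistency_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    ema_update(teacher, student)                 # teacher slowly tracks the student
    return loss.item()
```

In this sketch the student would be initialized from the self-supervised pretrained backbone and the teacher as a frozen copy of the student (e.g., via `copy.deepcopy` with gradients disabled), so that the pretrained features are bootstrapped into the semi-supervised stage.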


