Semantic segmentation of breast cancer metastases in histopathological slides is a challenging task. Significant variation in the data characteristics of histopathology images (domain shift) makes it difficult for deep learning models to generalize to unseen data. Our goal is to address this challenge with a conditional Fully Convolutional Network (co-FCN) whose output can be conditioned at run time and which improves its performance when a properly selected set of reference slides is used for conditioning. We adapted to our task a co-FCN originally applied to organ segmentation in volumetric medical images and trained it on Whole Slide Images (WSIs) from three of the five medical centers in the CAMELYON17 dataset. We tested the network on the WSIs of the remaining centers. We also developed an automated strategy for selecting the conditioning subset, based on unsupervised clustering of a target-specific set of reference patches, followed by a selection policy that relies on the similarity between the clusters and the input patch. We benchmarked the proposed method against a U-Net trained on the same dataset without conditioning. The conditioned network outperforms the U-Net on WSIs with Isolated Tumor Cells and micro-metastases from the test centers. Our contributions are an architecture applicable to the histopathology domain and an automated procedure for selecting the conditioning data.
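
The abstract only sketches the conditioning-subset selection. Below is a minimal illustrative sketch of one way such a cluster-then-match policy could be implemented, assuming patch embeddings are already available; the function name, embedding dimensionality, and hyper-parameters (n_clusters, n_conditioning) are placeholders and not the paper's actual implementation.

```python
# Hypothetical sketch of a conditioning-subset selection policy:
# cluster reference-patch embeddings, then pick patches from the cluster
# whose centroid is most similar to the input patch embedding.
import numpy as np
from sklearn.cluster import KMeans


def select_conditioning_patches(reference_embeddings, input_embedding,
                                n_clusters=8, n_conditioning=4):
    """Return indices of reference patches used to condition the network."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(reference_embeddings)

    # Cosine similarity between the input embedding and each cluster centroid.
    centroids = kmeans.cluster_centers_
    sims = (centroids @ input_embedding) / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(input_embedding) + 1e-12)
    best_cluster = int(np.argmax(sims))

    # Keep the first n_conditioning reference patches from the best-matching cluster.
    candidate_idx = np.flatnonzero(labels == best_cluster)
    return candidate_idx[:n_conditioning]


# Toy usage with random vectors standing in for real patch embeddings.
rng = np.random.default_rng(0)
reference_embeddings = rng.normal(size=(200, 128))
input_embedding = rng.normal(size=128)
print(select_conditioning_patches(reference_embeddings, input_embedding))
```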