The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
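The two core ingredients described above, masked image modeling for self-supervised pre-training and federated aggregation of client models, can be sketched minimally as follows. This is an illustrative toy example, not the authors' implementation: `random_patch_mask` and `fedavg` are hypothetical helper names, the scalar "weights" stand in for full model parameters, and the FedAvg-style weighted averaging is assumed as the aggregation rule.

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio, rng):
    """Pick a random subset of image-patch indices to mask,
    as in BEiT/MAE-style masked image modeling (illustrative)."""
    n_mask = int(num_patches * mask_ratio)
    idx = rng.permutation(num_patches)[:n_mask]
    mask = np.zeros(num_patches, dtype=bool)
    mask[idx] = True
    return mask

def fedavg(client_weights, client_sizes):
    """Aggregate client parameters by a data-size-weighted average
    (FedAvg-style); here each client's 'parameters' are a scalar."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)

# Toy round: mask 75% of 196 patches (14x14 grid), then average
# three clients' parameters weighted by their local dataset sizes.
mask = random_patch_mask(196, 0.75, rng)
global_w = fedavg([np.array(1.0), np.array(2.0), np.array(3.0)], [10, 20, 30])
```

In the real framework each client would reconstruct the masked patches with a Transformer before its updated weights are aggregated; the sketch only shows the masking and averaging mechanics.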