Brain tumor segmentation is critical for diagnosis and treatment planning. Yet current deep learning methods rely on centralized data collection, which raises privacy concerns and limits generalization across diverse institutions. In this paper, we propose TwinSegNet, a privacy-preserving federated learning framework that integrates a hybrid ViT-UNet model with personalized digital twins for accurate, real-time brain tumor segmentation. The architecture combines convolutional encoders with Vision Transformer bottlenecks to capture both local and global context. Each institution fine-tunes the global model on its private data to form a digital twin. Evaluated on nine heterogeneous MRI datasets, including BraTS 2019-2021 and custom tumor collections, TwinSegNet achieves high Dice scores (up to 0.90) and sensitivity/specificity exceeding 90%, demonstrating robustness across non-independent and identically distributed (non-IID) client distributions. Comparative results against centralized models such as TumorVisNet highlight TwinSegNet's effectiveness in preserving privacy without sacrificing performance. Our approach enables scalable, personalized segmentation in multi-institutional clinical settings while adhering to strict data confidentiality requirements.
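The core ideas named above (size-weighted federated averaging of client models, local fine-tuning into a per-institution "digital twin", and the Dice overlap metric) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`dice_score`, `fed_avg`, `digital_twin`) and the flat parameter vectors are illustrative assumptions standing in for the full ViT-UNet weights.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    # Dice = 2|P ∩ T| / (|P| + |T|); pred and target are binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def fed_avg(client_params, client_sizes):
    # FedAvg-style aggregation: average client parameter vectors,
    # weighted by each client's local dataset size.
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

def digital_twin(global_params, local_grad, lr=0.1, steps=5):
    # Fine-tune the shared global model on private data via a few
    # gradient steps, yielding the institution's personalized twin.
    # `local_grad` is a hypothetical callable returning the local gradient.
    w = global_params.copy()
    for _ in range(steps):
        w = w - lr * local_grad(w)
    return w
```

In a full round, each client would send its fine-tuned parameters (never raw scans) to the server, which applies `fed_avg` and redistributes the result; only the locally retained twin is used for that institution's inference.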