Nowadays, deep learning methods with large-scale datasets can produce clinically useful models for computer-aided diagnosis. However, privacy and ethical concerns are increasingly critical, making it difficult to collect large quantities of data from multiple institutions. Federated Learning (FL) provides a promising decentralized solution to train models collaboratively by exchanging client models instead of private data. However, the server aggregation of existing FL methods is observed to degrade model performance in real-world medical FL settings, a phenomenon we term retrogress. To address this problem, we propose a personalized retrogress-resilient framework that produces a superior personalized model for each client. Specifically, we devise Progressive Fourier Aggregation (PFA) at the server to achieve more stable and effective global knowledge gathering by gradually integrating client models from low-frequency to high-frequency components. Moreover, with an introduced deputy model to receive the aggregated server model, we design a Deputy-Enhanced Transfer (DET) strategy at the client, which conducts three steps of Recover-Exchange-Sublimate to ameliorate the personalized local model by transferring global knowledge smoothly. Extensive experiments on a real-world dermoscopic FL dataset demonstrate that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods and generalizes well to an out-of-distribution cohort. The code and dataset are available at https://github.com/CityU-AIM-Group/PRR-FL.
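To make the server-side aggregation concrete, the following is a minimal sketch of the low-frequency averaging idea behind PFA, not the authors' implementation: each client's weight matrix is transformed with a 2-D FFT, only a centered low-frequency band (whose size, controlled by the hypothetical `low_freq_ratio` parameter, grows over communication rounds) is replaced by the cross-client average, and the high-frequency components stay client-specific. Details such as operating on amplitude spectra and the exact band schedule are simplified here.

```python
import numpy as np

def progressive_fourier_aggregation(client_weights, low_freq_ratio):
    """Sketch of PFA-style aggregation (assumed simplification).

    client_weights: list of 2-D weight matrices, one per client.
    low_freq_ratio: fraction of the spectrum treated as "low frequency";
                    increasing it over rounds makes aggregation progressive.
    Returns one personalized matrix per client: shared low frequencies,
    client-specific high frequencies.
    """
    # FFT of each client's weights, shifted so low frequencies are centered.
    specs = [np.fft.fftshift(np.fft.fft2(w)) for w in client_weights]
    avg_spec = np.mean(specs, axis=0)

    h, w = client_weights[0].shape
    ch, cw = h // 2, w // 2
    rh = max(1, int(ch * low_freq_ratio))
    rw = max(1, int(cw * low_freq_ratio))

    aggregated = []
    for spec in specs:
        new_spec = spec.copy()
        # Replace only the centered low-frequency band with the average.
        new_spec[ch - rh:ch + rh, cw - rw:cw + rw] = \
            avg_spec[ch - rh:ch + rh, cw - rw:cw + rw]
        # Inverse FFT back to the weight domain (imaginary part is
        # numerical noise for real-valued inputs processed this way).
        aggregated.append(np.real(np.fft.ifft2(np.fft.ifftshift(new_spec))))
    return aggregated
```

With `low_freq_ratio = 1.0` the entire spectrum is averaged and PFA degenerates to plain FedAvg-style averaging; small ratios early in training exchange only the stable low-frequency knowledge, which is the intuition behind the gradual low-to-high schedule described above.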