Due to data privacy constraints, data sharing among multiple centers is restricted. Continual learning, as one approach to peer-to-peer federated learning, can promote multicenter collaboration on deep learning algorithm development by sharing intermediate models instead of training data. This work aims to investigate the feasibility of continual learning for multicenter collaboration on an exemplary application: brain metastasis identification using DeepMedic. A total of 920 contrast-enhanced T1-weighted MRI volumes are split to simulate multicenter collaboration scenarios. A continual learning algorithm, synaptic intelligence (SI), is applied to preserve important model weights while training on one center after another. In a bilateral collaboration scenario, continual learning with SI achieves a sensitivity of 0.917, and naive continual learning without SI achieves a sensitivity of 0.906, whereas two models trained solely on internal data without continual learning achieve sensitivities of only 0.853 and 0.831. In a seven-center multilateral collaboration scenario, the models trained on internal datasets (100 volumes per center) without continual learning obtain a mean sensitivity of 0.699. With single-visit continual learning (i.e., the shared model visits each center only once during training), the sensitivity is improved to 0.788 without SI and 0.849 with SI. With iterative continual learning (i.e., the shared model revisits each center multiple times during training), the sensitivity is further improved to 0.914, identical to the sensitivity of a model trained on the mixed data from all centers. Our experiments demonstrate that continual learning can improve brain metastasis identification performance for centers with limited data. This study demonstrates the feasibility of applying continual learning for peer-to-peer federated learning in multicenter collaboration.
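The core of SI is a per-parameter importance measure, accumulated along the training trajectory of earlier centers, that penalizes changes to important weights when training on the next center's data. The following is a minimal NumPy sketch of this idea on a toy scalar problem; the quadratic losses, step counts, and constants are illustrative assumptions, not the paper's DeepMedic setup:

```python
import numpy as np

def train(theta, grad_fn, steps=200, lr=0.1, omega=None, theta_star=None, c=0.0):
    """SGD with an optional SI quadratic penalty c * omega * (theta - theta_star)^2."""
    w = np.zeros_like(theta)  # running path integral of per-parameter importance
    for _ in range(steps):
        g = grad_fn(theta)
        if omega is not None:
            g = g + 2.0 * c * omega * (theta - theta_star)  # SI penalty gradient
        delta = -lr * g
        w += -g * delta  # importance accrued by this update (loss decrease share)
        theta = theta + delta
    return theta, w

# Toy stand-ins for two centers' objectives: center A minimizes (theta - 2)^2,
# center B minimizes (theta + 1)^2.
grad_A = lambda t: 2.0 * (t - 2.0)
grad_B = lambda t: 2.0 * (t + 1.0)

theta0 = np.zeros(1)
theta_A, w_A = train(theta0, grad_A)

# Normalize accumulated importance by total parameter displacement (xi avoids
# division by zero), as in the SI formulation.
xi = 0.1
Omega = w_A / ((theta_A - theta0) ** 2 + xi)

# Continual training on center B: with the SI anchor at theta_A vs. naively.
theta_B_si, _ = train(theta_A.copy(), grad_B, omega=Omega, theta_star=theta_A, c=1.0)
theta_B_naive, _ = train(theta_A.copy(), grad_B)
```

Without the penalty, the parameter moves all the way to center B's optimum (-1), forgetting center A; with the SI penalty, it settles at a compromise between the two optima, illustrating how important weights are preserved across centers.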