When applying transfer learning to medical image analysis, there are often significant gaps between the downstream tasks and the pre-training tasks. Previous methods mainly focus on improving the transferability of the pre-trained models to bridge these gaps. In fact, model fine-tuning can also play a very important role in tackling this problem. Conventional fine-tuning updates all deep neural network (DNN) layers with a single learning rate (LR), ignoring the fact that different layers have different transferabilities. In this work, we explore the behaviors of different layers during the fine-tuning stage. More precisely, we first hypothesize that lower-level layers are more domain-specific while higher-level layers are more task-specific, and verify this with a simple bi-directional fine-tuning scheme: pre-trained domain- or task-specific layers are harder to transfer to new tasks than general layers. On this basis, to make different layers better co-adapt to the downstream task according to their transferabilities, we propose a meta-learning-based LR learner, namely MetaLR, which automatically assigns an LR to each layer. Extensive experiments on various medical applications (i.e., POCUS, BUSI, Chest X-ray, and LiTS) confirm our hypothesis and show the superior performance of the proposed method over previous state-of-the-art fine-tuning methods.
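To illustrate the idea of a meta-learned, layer-wise LR schedule, the following is a minimal, hypothetical sketch (not the authors' released implementation): each parameter tensor keeps its own learnable log-LR, a virtual SGD step is taken with the current LRs, the resulting model is evaluated on a held-out meta batch, and the gradient of that meta loss with respect to the LRs is used to update them before the real update is applied. The toy network, the `meta_lr_step` helper, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of layer-wise LR meta-learning (assumed details, PyTorch >= 2.0).
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# One learnable log-LR per parameter tensor (layer-wise granularity).
log_lrs = {name: torch.tensor(-4.0, requires_grad=True)  # exp(-4) ~ 1.8e-2
           for name, _ in model.named_parameters()}
lr_optimizer = torch.optim.Adam(list(log_lrs.values()), lr=1e-2)

def meta_lr_step(train_batch, meta_batch):
    """One training step in which layer-wise LRs are refreshed by a meta loss."""
    x_tr, y_tr = train_batch
    x_me, y_me = meta_batch

    params = dict(model.named_parameters())
    train_loss = loss_fn(functional_call(model, params, (x_tr,)), y_tr)
    grads = torch.autograd.grad(train_loss, list(params.values()), create_graph=True)

    # Virtual SGD update with the current per-layer LRs (kept in the graph).
    virtual = {name: p - torch.exp(log_lrs[name]) * g
               for (name, p), g in zip(params.items(), grads)}

    # Meta loss on held-out data; its gradient w.r.t. the LRs is the hypergradient.
    meta_loss = loss_fn(functional_call(model, virtual, (x_me,)), y_me)
    lr_optimizer.zero_grad()
    meta_loss.backward()
    lr_optimizer.step()

    # Real update of the model with the just-refreshed layer-wise LRs.
    with torch.no_grad():
        for (name, p), g in zip(params.items(), grads):
            p -= torch.exp(log_lrs[name]) * g
    return train_loss.item(), meta_loss.item()

# Usage with random toy data:
train_batch = (torch.randn(8, 16), torch.randint(0, 2, (8,)))
meta_batch = (torch.randn(8, 16), torch.randint(0, 2, (8,)))
print(meta_lr_step(train_batch, meta_batch))
```

In this sketch, layers whose virtual updates hurt the meta loss see their LRs shrink, while layers that transfer poorly and need more adaptation see their LRs grow, which is the layer-wise co-adaptation behavior the abstract describes.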