Transfer learning (TL) from pretrained deep models is a standard practice in modern medical image classification (MIC). However, which levels of features should be reused is problem-dependent, and uniformly finetuning all layers of a pretrained model may be suboptimal. This insight has partly motivated the recent \emph{differential} TL strategies, such as TransFusion (TF) and layer-wise finetuning (LWFT), which treat the layers of a pretrained model differentially. In this paper, we add one more strategy to this family, called \emph{TruncatedTL}, which reuses and finetunes appropriate bottom layers and directly discards the remaining layers. Compared with other differential TL methods, this yields not only superior MIC performance but also compact models for efficient inference. We validate the performance and model efficiency of TruncatedTL on three MIC tasks covering both 2D and 3D images. For example, on the BIMCV COVID-19 classification dataset, we obtain improved performance with around $1/4$ of the model size and $2/3$ of the inference time of the standard full TL model. Code is available at https://github.com/sun-umn/Transfer-Learning-in-Medical-Imaging.
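To make the truncation idea concrete, the following is a minimal PyTorch sketch of truncated transfer learning, not the paper's exact implementation: it keeps only the bottom stages of an ImageNet-pretrained ResNet-50, discards the remaining layers, and attaches a new classification head so that all retained layers can be finetuned. The backbone choice, the cut point \texttt{num\_bottom\_blocks}, and the helper name \texttt{build\_truncated\_resnet50} are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

def build_truncated_resnet50(num_classes: int,
                             num_bottom_blocks: int = 2) -> nn.Module:
    # Hypothetical helper: truncate a pretrained backbone and add a new head.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Bottom-to-top stages of ResNet-50: stem followed by the residual stages.
    stages = [
        nn.Sequential(backbone.conv1, backbone.bn1,
                      backbone.relu, backbone.maxpool),
        backbone.layer1,
        backbone.layer2,
        backbone.layer3,
        backbone.layer4,
    ]
    # Keep the stem plus the requested number of bottom stages; drop the rest.
    truncated = nn.Sequential(*stages[: num_bottom_blocks + 1])
    # Infer the channel dimension at the cut point with a dummy forward pass.
    with torch.no_grad():
        feat_dim = truncated(torch.zeros(1, 3, 224, 224)).shape[1]
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(feat_dim, num_classes))
    # All retained layers are finetuned end to end together with the new head.
    return nn.Sequential(truncated, head)

# Example: a binary classifier that reuses only the two bottom residual stages.
model = build_truncated_resnet50(num_classes=2, num_bottom_blocks=2)
\end{verbatim}

In this sketch, choosing a smaller \texttt{num\_bottom\_blocks} discards more top layers, giving a smaller model and faster inference; the appropriate cut point would be selected per task.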