Transfer learning with pre-trained neural networks is a common strategy for training classifiers in medical image analysis. Without proper channel selection, this often results in unnecessarily large models that hinder deployment and explainability. In this paper, we propose a novel approach to efficiently build small and well-performing networks by introducing channel-scaling layers. A channel-scaling layer is attached to each frozen convolutional layer, with the trainable scaling weights inferring the importance of the corresponding feature channels. Unlike fine-tuning approaches, we maintain the weights of the original channels, and large datasets are not required. By imposing L1 regularization and thresholding on the scaling weights, this framework iteratively removes unnecessary feature channels from a pre-trained model. Using an ImageNet pre-trained VGG16 model, we demonstrate the capabilities of the proposed framework on classifying opacity from chest X-ray images. The results show that we can reduce the number of parameters by 95% while delivering superior performance.
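The core mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the regularization strength `lam` and pruning `threshold` are hypothetical values chosen for the example, and in practice the scaling weights would be learned by gradient descent alongside the classifier head.

```python
import numpy as np

def channel_scale(features, scales):
    """Multiply each feature channel by its scaling weight.
    features: (N, C, H, W) activations from a frozen conv layer.
    scales:   (C,) trainable per-channel scaling weights."""
    return features * scales.reshape(1, -1, 1, 1)

def l1_penalty(scales, lam=1e-3):
    # L1 regularization drives unimportant scales toward zero
    return lam * np.abs(scales).sum()

def prune_mask(scales, threshold=0.01):
    # Channels whose learned scale falls below the threshold
    # are marked for removal in the next iteration
    return np.abs(scales) >= threshold

# Example: 4 channels; two have shrunk to near zero under L1
feats = np.random.rand(2, 4, 8, 8)
scales = np.array([0.9, 0.001, 0.5, 0.0])
scaled = channel_scale(feats, scales)
keep = prune_mask(scales)
# keep → [True, False, True, False]: channels 1 and 3 are pruned
```

Iterating this scale-regularize-threshold cycle on each frozen layer progressively shrinks the network without retraining the original convolutional weights.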