Transfer learning enables solving a specific task with limited data by reusing deep networks pre-trained on large-scale datasets. Typically, when transferring the learned knowledge from the source task to the target task, the last few layers are fine-tuned (re-trained) on the target dataset. However, these layers were originally designed for the source task and may not be suitable for the target task. In this paper, we introduce a mechanism for automatically tuning Convolutional Neural Networks (CNNs) for improved transfer learning. The pre-trained CNN layers are tuned with knowledge from the target data using Bayesian Optimization. First, we train the final layer of the base CNN model after replacing the number of neurons in the softmax layer with the number of classes in the target task. Next, the pre-trained CNN is tuned automatically by observing the classification performance on the validation data (greedy criterion). To evaluate the proposed method, experiments are conducted on three benchmark datasets: CalTech-101, CalTech-256, and Stanford Dogs. The classification results obtained with the proposed AutoTune method outperform the standard transfer learning baselines on all three datasets, achieving $95.92\%$, $86.54\%$, and $84.67\%$ accuracy on CalTech-101, CalTech-256, and Stanford Dogs, respectively. The experimental results show that tuning the pre-trained CNN layers with knowledge from the target dataset confers better transfer learning ability. The source code is available at https://github.com/JekyllAndHyde8999/AutoTune_CNN_TransferLearning.
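The tuning loop described above can be sketched in a few lines. This is a minimal, dependency-free illustration, not the paper's implementation: the search space, the configuration names (`dense_units`, `dropout`, `num_tuned_layers`), and the scoring function are all hypothetical stand-ins, and random sampling replaces the Bayesian Optimization proposal step to keep the sketch self-contained. Only the greedy criterion (keep the configuration with the best validation accuracy) is taken from the text.

```python
import random

# Hypothetical search space over the last layers of the pre-trained CNN.
# In the paper, candidates like these would be proposed by Bayesian
# Optimization rather than enumerated by hand.
SEARCH_SPACE = {
    "dense_units": [128, 256, 512],
    "dropout": [0.2, 0.3, 0.5],
    "num_tuned_layers": [1, 2, 3],
}

def validation_accuracy(config):
    # Stand-in for re-configuring the pre-trained CNN, fine-tuning it on the
    # target data, and measuring accuracy on held-out validation data. A real
    # run would train the network here; this toy score just makes the sketch
    # executable and deterministic.
    return (
        0.5
        + 0.05 * config["dense_units"] / 512
        + 0.1 * (1.0 - abs(config["dropout"] - 0.3))
        + 0.02 * config["num_tuned_layers"]
    )

def greedy_search(space, trials=10, seed=0):
    # Greedy criterion from the text: among the evaluated configurations,
    # keep whichever one scores best on the validation data.
    # (Random sampling below is a simplification of the Bayesian
    # Optimization step that proposes candidates in the paper.)
    rng = random.Random(seed)
    best_cfg, best_acc = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        acc = validation_accuracy(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

best_cfg, best_acc = greedy_search(SEARCH_SPACE)
```

In practice the expensive part is `validation_accuracy`: each call re-trains the modified layers on the target dataset, which is why a sample-efficient proposal strategy such as Bayesian Optimization is used instead of exhaustive or purely random search.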