Transfer learning from large natural image datasets, fine-tuning of deep neural networks, and the use of the corresponding pre-trained networks have become the de facto core of art analysis applications. Nevertheless, the effects of transfer learning are still poorly understood. In this paper, we first use techniques for visualizing the network's internal representations in order to provide clues to what the network has learned on artistic images. Then, we provide a quantitative analysis of the changes introduced by the learning process, using metrics in both the feature and parameter spaces, as well as metrics computed on the set of maximal activation images. These analyses are performed on several variations of the transfer learning procedure. In particular, we observe that the network can specialize some pre-trained filters to the new image modality, and that higher layers tend to concentrate classes. Finally, we show that a double fine-tuning involving a medium-size artistic dataset can improve classification on smaller datasets, even when the task changes.
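For illustration, the following is a minimal sketch (not the authors' implementation) of such a double fine-tuning procedure, assuming PyTorch/torchvision; `medium_art_loader` and `small_target_loader` are hypothetical DataLoaders for the intermediate artistic dataset and the small target dataset, and the class counts are placeholders.

```python
# Minimal sketch of double fine-tuning: ImageNet pre-training -> medium-size
# artistic dataset -> small target dataset. Assumes PyTorch/torchvision;
# the DataLoaders and class counts below are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, num_classes, epochs=5, lr=1e-4, device="cuda"):
    """Replace the classification head and fine-tune all layers on `loader`."""
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Step 1: start from a network pre-trained on natural images (ImageNet).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
# Step 2: first fine-tuning on a medium-size artistic dataset (placeholder: 25 classes).
model = fine_tune(model, medium_art_loader, num_classes=25)
# Step 3: second fine-tuning on the smaller target dataset, possibly a new task
# (placeholder: 10 classes).
model = fine_tune(model, small_target_loader, num_classes=10)
```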