In deep learning, transfer learning (TL) has become the de facto approach for image-related tasks. Visual features learned for one task have been shown to be reusable for other tasks, significantly improving performance. By reusing deep representations, TL enables the use of deep models in domains with limited data availability, limited computational resources, and/or limited access to human experts, domains that include the vast majority of real-life applications. This paper conducts an experimental evaluation of TL, exploring its trade-offs with respect to performance, environmental footprint, human hours, and computational requirements. The results highlight the cases where a cheap feature-extraction approach is preferable, and the situations where an expensive fine-tuning effort may be worth the added cost. Finally, a set of guidelines on the use of TL is proposed.