The use of meta-learning and transfer learning for few-shot image classification is a well-researched area, with many papers showcasing the advantages of transfer learning over meta-learning when data is plentiful and there are no major limitations on computational resources. In this paper we present experimental results from testing various state-of-the-art transfer learning weights and architectures against comparable state-of-the-art work in the meta-learning field for image classification, utilizing Model-Agnostic Meta-Learning (MAML). Our results show that both approaches provide adequate performance when the dataset is sufficiently large, but that both struggle to maintain sufficient performance when data sparsity is introduced. This problem is moderately reduced with the use of image augmentation and the fine-tuning of hyperparameters. In this paper we: (1) describe our process of developing a robust multi-class convolutional neural network (CNN) for the task of few-shot image classification, (2) demonstrate that transfer learning is the superior method for building an image classification model when the dataset is large, and (3) show that MAML outperforms transfer learning when data is very limited. The code is available here: github.com/JBall1/Few-Shot-Limited-Data
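For readers unfamiliar with MAML, the sketch below illustrates the inner-/outer-loop update it performs: adapt a shared initialization to each task on its support set, then update that initialization so the adapted models do well on the query sets. This is a minimal illustrative sketch in PyTorch, not the repository's actual code; the model, task batch format, loss function, and learning rate `inner_lr` are assumptions made here for the example.

```python
import torch
from torch.func import functional_call

def maml_meta_step(model, meta_opt, tasks, loss_fn, inner_lr=0.01):
    """One MAML meta-update over a batch of few-shot tasks.

    Each task is a tuple (x_support, y_support, x_query, y_query);
    `meta_opt` is an optimizer over model.parameters(), e.g. Adam.
    """
    meta_opt.zero_grad()
    params = dict(model.named_parameters())
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        # Inner loop: one gradient step on the support set. create_graph=True
        # keeps the adaptation differentiable so the meta-gradient can flow
        # back through it (second-order MAML).
        support_loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(support_loss, params.values(),
                                    create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer loop: evaluate the task-adapted parameters on the query set.
        meta_loss = meta_loss + loss_fn(
            functional_call(model, adapted, (x_q,)), y_q)
    # Average query losses across tasks and update the shared initialization.
    (meta_loss / len(tasks)).backward()
    meta_opt.step()
    return meta_loss.item() / len(tasks)
```

In contrast, the transfer learning baselines compared in this paper start from pretrained weights and fine-tune directly on the target data, with no inner loop; MAML instead learns an initialization that is explicitly optimized to adapt from only a few support examples, which is the regime examined in contribution (3).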