Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this baseline outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS, and FC-100, using the same hyper-parameters throughout. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
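As a minimal sketch of the transductive fine-tuning idea mentioned above: the model is fine-tuned on the labeled support set with cross-entropy while the entropy of its predictions on the unlabeled query set is also minimized. The function name, hyper-parameters (step count, learning rate, unit entropy weight), and optimizer choice below are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def transductive_finetune(model, support_x, support_y, query_x,
                          num_steps=25, lr=5e-5):
    """Sketch: fine-tune `model` on one few-shot episode transductively.

    Minimizes cross-entropy on the labeled support examples plus the
    Shannon entropy of predictions on the unlabeled query examples
    (hyper-parameters here are illustrative assumptions).
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_steps):
        optimizer.zero_grad()
        # Supervised cross-entropy on the few labeled support examples.
        ce_loss = F.cross_entropy(model(support_x), support_y)
        # Entropy of the query predictions; minimizing it encourages
        # confident decisions on the unlabeled queries.
        query_probs = F.softmax(model(query_x), dim=1)
        entropy = -(query_probs * torch.log(query_probs + 1e-12)).sum(dim=1).mean()
        loss = ce_loss + entropy
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        # Predicted labels for the query set after fine-tuning.
        return model(query_x).argmax(dim=1)
```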