While advances in pre-training have led to dramatic improvements in few-shot learning of NLP tasks, there is limited understanding of what drives successful few-shot adaptation on a given dataset. In particular, given a new dataset and a pre-trained model, what properties of the dataset make it \emph{few-shot learnable}, and are these properties independent of the specific adaptation techniques used? We consider an extensive set of recent few-shot learning methods and show that their performance across a large number of datasets is highly correlated, suggesting that few-shot hardness may be intrinsic to a dataset, for a given pre-trained model. To estimate intrinsic few-shot hardness, we then propose a simple and lightweight metric called "Spread" that captures the intuition that few-shot learning is made possible by exploiting feature-space invariances between training and test samples. Our metric better accounts for few-shot hardness than existing notions of hardness, and is ~8-100x faster to compute.
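The intuition above, that few-shot learning succeeds when test samples lie near training samples in feature space, can be illustrated with a toy score. This is a minimal sketch of one plausible instantiation (mean cosine similarity of each test feature to its nearest training feature), not the paper's actual Spread metric; the function name and formulation are hypothetical.

```python
import numpy as np

def spread_like_score(train_feats, test_feats):
    """Hypothetical spread-style score: mean cosine similarity between each
    test feature vector and its nearest training feature vector. Higher
    values indicate more train/test feature-space overlap (invariance),
    which the intuition above associates with easier few-shot learning.
    NOTE: illustrative sketch only, not the metric proposed in the paper."""
    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    # (n_test, n_train) matrix of cosine similarities
    sims = l2_normalize(test_feats) @ l2_normalize(train_feats).T
    return float(sims.max(axis=1).mean())

# Toy check: test features near the training features score higher
# than unrelated random features.
rng = np.random.default_rng(0)
train = rng.normal(size=(32, 16))
near = train[:8] + 0.05 * rng.normal(size=(8, 16))  # slight perturbations
far = rng.normal(size=(8, 16))                      # unrelated features
assert spread_like_score(train, near) > spread_like_score(train, far)
```

In practice the features would come from the pre-trained model's encoder rather than random vectors; the toy data here only demonstrates that the score orders high-overlap and low-overlap test sets as expected.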