Modern classification models tend to struggle when annotated data is scarce. To overcome this issue, several neural few-shot classification models have emerged, yielding significant progress over time, both in Computer Vision and Natural Language Processing. In the latter, such models used to rely on fixed word embeddings before the advent of transformers. Additionally, some models used in Computer Vision have yet to be tested in NLP applications. In this paper, we compare all these models, first adapting those originally designed for image processing to NLP, and second providing them access to transformers. We then test these models, all equipped with the same transformer-based encoder, on the intent detection task, known for its large number of classes. Our results reveal that while the methods perform almost equally well on the ARSC dataset, this is not the case for the intent detection task, where the most recent and supposedly best competitors perform worse than older and simpler ones (even though all are given access to transformers). We also show that a simple baseline is surprisingly strong. All the newly developed models, as well as the evaluation framework, are made publicly available.
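To make the shared-encoder setup concrete, below is a minimal sketch of one widely used few-shot method, a prototypical network, plugged on top of a pretrained transformer encoder. The choice of `bert-base-uncased` and the prototypical-network head are illustrative assumptions, not the specific systems compared in the paper; the sketch only shows how a transformer-based encoder can be shared across episodes.

```python
# Minimal sketch: a prototypical-network-style few-shot classifier on top of a
# shared transformer encoder. The model name (bert-base-uncased) and the
# prototypical-network head are illustrative assumptions, not the paper's
# specific systems.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode a list of sentences into [CLS] vectors with the shared encoder."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    return out.last_hidden_state[:, 0]  # (batch, hidden): [CLS] embedding

def prototypes(support_texts, support_labels, num_classes):
    """Average the support embeddings of each class into one prototype."""
    emb = embed(support_texts)
    labels = torch.tensor(support_labels)
    return torch.stack([emb[labels == c].mean(dim=0) for c in range(num_classes)])

def classify(query_texts, protos):
    """Assign each query utterance to its nearest class prototype."""
    q = embed(query_texts)
    dists = torch.cdist(q, protos)  # (n_query, n_classes) Euclidean distances
    return dists.argmin(dim=1)

# Toy 2-way 2-shot intent-detection episode (hypothetical utterances).
support = ["play some jazz", "put on a song", "what's the weather", "will it rain"]
labels = [0, 0, 1, 1]
protos = prototypes(support, labels, num_classes=2)
print(classify(["turn up the music", "is it sunny outside"], protos))
```

Because the encoder is held fixed across methods in this sketch, swapping in a different few-shot head (matching networks, relation networks, or a simple nearest-neighbor baseline) only changes the code after `embed`, which mirrors the controlled comparison described above.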