Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain. In this paper, we explore retrieval-based methods for intent classification and slot filling tasks in few-shot settings. Retrieval-based methods make predictions based on labeled examples in the retrieval index that are similar to the input, and thus can adapt to new domains simply by changing the index without having to retrain the model. However, it is non-trivial to apply such methods on tasks with a complex label space like slot filling. To this end, we propose a span-level retrieval method that learns similar contextualized representations for spans with the same label via a novel batch-softmax objective. At inference time, we use the labels of the retrieved spans to construct the final structure with the highest aggregated score. Our method outperforms previous systems in various few-shot settings on the CLINC and SNIPS benchmarks.
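The batch-softmax objective described above can be sketched as an in-batch contrastive loss: each span's contextualized embedding is pulled toward other in-batch spans that share its label and pushed away from the rest. The sketch below is a hypothetical minimal implementation, not the paper's exact formulation; the function name, the temperature parameter, and the cosine-similarity choice are assumptions for illustration.

```python
import numpy as np

def batch_softmax_loss(span_embs, labels, temperature=0.1):
    """Hypothetical sketch of a batch-softmax objective over span
    embeddings: maximize, for each span, the softmax probability
    (over all other spans in the batch) of spans with the same label."""
    # Normalize so dot products are cosine similarities.
    embs = span_embs / np.linalg.norm(span_embs, axis=1, keepdims=True)
    sims = embs @ embs.T / temperature            # (B, B) similarity matrix
    np.fill_diagonal(sims, -np.inf)               # exclude self-similarity
    # Row-wise log-softmax over all other spans in the batch.
    log_probs = sims - np.log(np.sum(np.exp(sims), axis=1, keepdims=True))
    losses = []
    for i in range(len(labels)):
        positives = [j for j in range(len(labels))
                     if j != i and labels[j] == labels[i]]
        if positives:                             # average over same-label spans
            losses.append(-np.mean(log_probs[i, positives]))
    return float(np.mean(losses)) if losses else 0.0
```

Under this sketch, a batch whose same-label spans already have similar embeddings yields a lower loss than one where labels are scrambled across dissimilar embeddings, which is the gradient signal that makes retrieval by embedding similarity recover spans with the same label at inference time.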