Text classification struggles to generalize to unseen classes when only a few labeled instances per class are available. In this few-shot learning (FSL) setting, metric-based meta-learning approaches have shown promising results. Previous studies mainly aim to derive a prototype representation for each class. However, they neglect that constructing a compact representation that expresses the entire meaning of a class is challenging yet unnecessary. They also overlook the importance of capturing the inter-dependency between the query and the support set for few-shot text classification. To address these issues, we propose MGIMN, a meta-learning based method that performs instance-wise comparison followed by aggregation to generate class-wise matching vectors, instead of prototype learning. The key to instance-wise comparison is interactive matching within both the class-specific context and the episode-specific context. Extensive experiments demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches under both the standard FSL and generalized FSL settings.
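To make the core idea concrete, the following is a minimal NumPy sketch of instance-wise comparison followed by class-wise aggregation, as opposed to collapsing each class into a single prototype. This is not the paper's MGIMN architecture: the matching features (absolute difference and element-wise product), the mean/max pooling, and the final scoring are simplified stand-ins chosen for illustration; the actual model uses learned interactive matching within class-specific and episode-specific contexts.

```python
import numpy as np

def instance_wise_match(query, support):
    """Compare a query embedding (d,) against each support embedding (k, d).

    Returns one matching-feature vector per support instance, (k, 2d).
    The features here (|diff| and product) are illustrative, not the
    paper's learned interactive matching.
    """
    diff = np.abs(support - query)   # per-instance element-wise difference
    prod = support * query           # per-instance element-wise product
    return np.concatenate([diff, prod], axis=1)

def class_matching_vector(query, class_support):
    """Aggregate instance-level matches into one class-wise matching vector.

    No prototype is built: every support instance is compared to the query,
    then the per-instance results are pooled (mean + max) into (4d,).
    """
    m = instance_wise_match(query, class_support)
    return np.concatenate([m.mean(axis=0), m.max(axis=0)])

# Toy episode: 2-way 3-shot, embedding dimension 4.
rng = np.random.default_rng(0)
support = {c: rng.normal(size=(3, 4)) for c in range(2)}  # class -> (shots, d)
query = rng.normal(size=4)

vecs = np.stack([class_matching_vector(query, s) for s in support.values()])
scores = vecs.sum(axis=1)   # stand-in for a learned scoring layer
pred = int(np.argmax(scores))
```

In a real model the embeddings would come from a text encoder and the scoring would be a trained network; the sketch only shows the data flow of comparing per-instance and aggregating per-class.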