Neural text matching models have been applied to a range of tasks such as question answering and natural language inference, and have achieved good performance. However, these neural models have limited adaptability: their performance declines when they encounter test examples from a different dataset or even a different task. Adaptability is particularly important in the few-shot setting: in many cases, only a limited amount of labeled data is available for a target dataset or task, while a richly labeled source dataset or task may be accessible. Adapting a model trained on the abundant source data to a few-shot target dataset or task, however, is challenging. To tackle this challenge, we propose the Meta-Weight Regulator (MWR), a meta-learning approach that learns to assign weights to the source examples based on their relevance to the target loss. Specifically, MWR first trains the model on the uniformly weighted source examples and measures the efficacy of the model on the target examples via a loss function. By iteratively performing (meta) gradient descent, MWR propagates high-order gradients back to the source examples and uses these gradients to update the source-example weights in a way that reflects their relevance to target performance. As MWR is model-agnostic, it can be applied to any backbone neural model. Extensive experiments are conducted with various backbone text matching models on four widely used datasets and two tasks. The results demonstrate that our approach significantly outperforms a number of existing adaptation methods and effectively improves the cross-dataset and cross-task adaptability of neural text matching models in the few-shot setting.
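To make the meta-gradient mechanism concrete, the following is a minimal sketch of one reweighting step in PyTorch on a toy linear model. It is not the paper's implementation: the model, batch sizes, step sizes (alpha, eta), and the clamp-and-renormalize update on the weights are all illustrative assumptions; only the overall pattern (an inner weighted source-loss step, followed by a high-order gradient of the target loss with respect to the source-example weights) reflects the approach described above.

```python
import torch

torch.manual_seed(0)

# Toy data: an abundant source batch and a few-shot target batch (assumed shapes).
x_src, y_src = torch.randn(8, 5), torch.randn(8, 1)
x_tgt, y_tgt = torch.randn(2, 5), torch.randn(2, 1)

theta = torch.randn(5, 1, requires_grad=True)       # toy model parameters
w = torch.full((8,), 1.0 / 8, requires_grad=True)   # per-example source weights (uniform init)
alpha, eta = 0.1, 0.5                               # inner / meta step sizes (assumed)

def loss_fn(theta, x, y):
    # Per-example squared error for the toy linear model.
    return ((x @ theta - y) ** 2).mean(dim=1)

# Inner step: one gradient step on the weighted source loss.
# create_graph=True keeps the graph so gradients can later flow back into w.
src_loss = (w * loss_fn(theta, x_src, y_src)).sum()
grad_theta = torch.autograd.grad(src_loss, theta, create_graph=True)[0]
theta_prime = theta - alpha * grad_theta

# Outer (meta) step: evaluate the target loss under the updated parameters;
# its gradient w.r.t. w is the high-order signal that reweights source examples.
tgt_loss = loss_fn(theta_prime, x_tgt, y_tgt).mean()
grad_w = torch.autograd.grad(tgt_loss, w)[0]

with torch.no_grad():
    w = torch.clamp(w - eta * grad_w, min=0.0)      # downweight examples that hurt the target
    w = w / w.sum().clamp(min=1e-8)                 # renormalize to keep a valid weighting
```

In a full training loop this step would be iterated, with the model then retrained (or further trained) on the source examples under the updated weights; because the update only touches example weights and gradients, the same pattern applies to any backbone model.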