Transferring knowledge from one domain to another is of practical importance for many tasks in natural language processing, especially when the amount of available data in the target domain is limited. In this work, we propose a novel few-shot approach to domain adaptation in the context of Named Entity Recognition (NER). It consists of two steps: a variable base module and a template module that leverages the knowledge captured in pre-trained language models with the help of simple descriptive patterns. Our approach is simple yet versatile and can be applied in both few-shot and zero-shot settings. Evaluating our lightweight approach across a number of different datasets shows that it can boost the performance of state-of-the-art baselines by 2-5% in F1 score.
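To make the template idea concrete, the following is a minimal sketch of how a candidate span might be scored against entity-type label words with a pre-trained masked language model. The pattern wording, label words, model choice, and function names below are illustrative assumptions and not the configuration described in this work.

```python
# Minimal sketch, assuming a cloze-style descriptive pattern is filled by a
# pre-trained masked language model. All names and templates here are
# hypothetical illustrations, not the paper's actual modules.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Hypothetical entity-type label words scored in the mask slot.
LABEL_WORDS = ["person", "location", "organization"]


def score_span(sentence: str, span: str) -> dict:
    """Score a candidate span against each label word via a descriptive pattern."""
    # Simple descriptive pattern with one mask slot, e.g. "... Berlin is a [MASK]."
    prompt = f"{sentence} {span} is a {fill_mask.tokenizer.mask_token}."
    predictions = fill_mask(prompt, targets=LABEL_WORDS)
    return {p["token_str"]: p["score"] for p in predictions}


if __name__ == "__main__":
    scores = score_span("Alice flew to Berlin on Monday.", "Berlin")
    print(max(scores, key=scores.get), scores)  # "location" is expected to score highest
```

Because the pattern is plain natural language, such a template can in principle be applied to a new domain without gradient updates, which is one way a zero-shot setting could be handled.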