Natural language processing (NLP) tasks (e.g., question answering in English) benefit from knowledge of other tasks (e.g., named entity recognition in English) and of other languages (e.g., question answering in Spanish). Such shared representations are typically learned in isolation, either across tasks or across languages. In this work, we propose a meta-learning approach to learn the interactions between both tasks and languages. We also investigate the role of different sampling strategies used during meta-learning. We present experiments on five tasks and six languages from the XTREME multilingual benchmark. Our meta-learned model clearly outperforms competitive baselines, including multi-task models. We also present zero-shot evaluations on unseen target languages to demonstrate the utility of our approach.
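To make the setting concrete, the sketch below shows what meta-training over sampled (task, language) episodes with a temperature-based sampling strategy could look like. The abstract does not specify the meta-learning algorithm or the sampling scheme, so this is a minimal illustration only, assuming a first-order Reptile-style update; all task names, dataset sizes, model, and hyperparameters are hypothetical stand-ins, not the paper's actual setup.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical (task, language) episodes with made-up dataset sizes;
# these names are illustrative assumptions, not the paper's configuration.
SIZES = {("QA", "en"): 80000, ("NER", "en"): 20000,
         ("QA", "es"): 10000, ("POS", "hi"): 5000}

def sample_episode(tau=0.5):
    """Temperature-based sampling: P(episode) is proportional to size**tau.
    tau=1 recovers size-proportional sampling; tau=0 is uniform."""
    episodes = list(SIZES)
    weights = [SIZES[e] ** tau for e in episodes]
    return random.choices(episodes, weights=weights, k=1)[0]

# Tiny linear model standing in for a shared multilingual encoder.
model = nn.Linear(16, 4)

def episode_batch(episode, n=32):
    # Random tensors standing in for a real mini-batch from this episode.
    return torch.randn(n, 16), torch.randint(0, 4, (n,))

def reptile_step(model, batch, inner_lr=1e-2, meta_lr=0.1, inner_steps=3):
    # First-order Reptile: adapt the model on the sampled episode, then move
    # the meta-parameters a fraction of the way toward the adapted weights.
    start = {k: v.clone() for k, v in model.state_dict().items()}
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    x, y = batch
    for _ in range(inner_steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    with torch.no_grad():
        for k, v in model.state_dict().items():
            v.copy_(start[k] + meta_lr * (v - start[k]))

# Meta-training loop: repeatedly sample an episode and take one meta-step.
for step in range(100):
    ep = sample_episode(tau=0.5)
    reptile_step(model, episode_batch(ep))
```

The temperature `tau` is one way to trade off between favoring high-resource episodes (larger `tau`) and giving low-resource task-language pairs more meta-training exposure (smaller `tau`); the sampling strategies the paper actually investigates may differ.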