Prompting pre-trained language models has achieved impressive performance on various NLP tasks, especially in low-data regimes. Despite the success of prompting in monolingual settings, applying prompt-based methods in multilingual scenarios has been limited to a narrow set of tasks, due to the high cost of handcrafting multilingual prompts. In this paper, we present the first work on prompt-based multilingual relation classification (RC), introducing an efficient and effective method that constructs prompts from relation triples and requires only minimal translation of the class labels. We evaluate its performance in fully supervised, few-shot, and zero-shot scenarios, and analyze its effectiveness across 14 languages, across prompt variants, and under training on English task data in cross-lingual settings. We find that in both the fully supervised and few-shot scenarios, our prompt method beats two competitive baselines: fine-tuning XLM-R_EM and null prompts. It also outperforms the random baseline by a large margin in zero-shot experiments. Our method requires little in-language knowledge and can serve as a strong baseline for similar multilingual classification tasks.
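To make the prompt-construction idea concrete, the sketch below illustrates one way a prompt could be assembled from a relation triple so that only the class-label verbalizations need translating per language. All names here (RELATION_VERBALIZERS, build_prompt), the template, and the example labels are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: triple-based prompt construction where only the
# relation-label verbalizations are translated per language. Everything
# below is an assumption for illustration, not the paper's exact method.

# Per-language verbalizations of the relation labels -- the only component
# that needs (minimal) translation in this scheme.
RELATION_VERBALIZERS: dict[str, dict[str, str]] = {
    "en": {"place_of_birth": "place of birth", "employer": "employer"},
    "de": {"place_of_birth": "Geburtsort", "employer": "Arbeitgeber"},
}

def build_prompt(head: str, tail: str, lang: str, label: str) -> str:
    """Fill a language-agnostic template with the entity pair and the
    in-language verbalization of one candidate relation label."""
    verbalized = RELATION_VERBALIZERS[lang][label]
    # Cloze-style template: the entities stay as they appear in the input;
    # only the relation label is rendered in the target language.
    return f"{head} [SEP] {verbalized} [SEP] {tail}"

# One prompt per candidate relation; a pre-trained LM scores each prompt,
# and the label whose prompt scores highest is predicted.
candidates = [build_prompt("Ada Lovelace", "London", "de", lbl)
              for lbl in RELATION_VERBALIZERS["de"]]
```

The design point this sketch tries to capture is that the template itself stays language-agnostic, so extending to a new language only requires translating the handful of label verbalizations rather than handcrafting full prompts.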