Prompt-based tuning has proven effective for pretrained language models (PLMs). While most existing work focuses on monolingual prompts, we study multilingual prompts for multilingual PLMs, especially in the zero-shot cross-lingual setting. To alleviate the effort of designing different prompts for multiple languages, we propose a novel model, called UniPrompt, that uses a unified prompt for all languages. Unlike discrete prompts and soft prompts, the unified prompt is model-based and language-agnostic. Specifically, the unified prompt is initialized by a multilingual PLM to produce language-independent representations, which are then fused with the text input. During inference, the prompt can be pre-computed, so no extra computation cost is incurred. To complement the unified prompt, we propose a new initialization method for the target label words that further improves the model's transferability across languages. Extensive experiments show that our proposed methods significantly outperform strong baselines across different languages. We will release data and code to facilitate future research.
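To make the idea concrete, the sketch below illustrates, under stated assumptions, how a model-based prompt of this kind could be pre-computed once and fused with the input: a small prompt encoder (assumed to be initialized from a multilingual PLM) maps fixed prompt tokens to language-agnostic vectors, which are cached and prepended to the token embeddings. The names (PromptEncoder, fuse_with_input), the layer sizes, and the concatenation-style fusion are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the described mechanism; NOT the authors' implementation.
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    """Model-based prompt: a small transformer stack (assumed to be initialized
    from a multilingual PLM) that maps fixed prompt tokens to language-agnostic
    prompt vectors."""
    def __init__(self, hidden_size: int = 768, prompt_length: int = 8):
        super().__init__()
        self.prompt_tokens = nn.Parameter(torch.randn(prompt_length, hidden_size))
        layer = nn.TransformerEncoderLayer(d_model=hidden_size, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    @torch.no_grad()
    def precompute(self) -> torch.Tensor:
        # The prompt does not depend on the input text, so it can be computed
        # once and cached; inference then incurs no extra prompt-encoding cost.
        return self.encoder(self.prompt_tokens.unsqueeze(0))  # (1, prompt_len, hidden)

def fuse_with_input(prompt_states: torch.Tensor, input_embeds: torch.Tensor) -> torch.Tensor:
    """Illustrative fusion: prepend the cached prompt states to the token
    embeddings before they are fed to the multilingual PLM body."""
    batch = input_embeds.size(0)
    return torch.cat([prompt_states.expand(batch, -1, -1), input_embeds], dim=1)

if __name__ == "__main__":
    prompt_encoder = PromptEncoder()
    cached_prompt = prompt_encoder.precompute()   # computed once, offline
    dummy_input = torch.randn(4, 32, 768)         # (batch, seq_len, hidden)
    fused = fuse_with_input(cached_prompt, dummy_input)
    print(fused.shape)                            # torch.Size([4, 40, 768])
```

Because the cached prompt is shared across all languages, the same pre-computed vectors serve every target language at inference time, which is what makes the prompt language-agnostic in this sketch.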