Prompting shows promising results in few-shot scenarios. However, its strength for multilingual/cross-lingual problems has not been fully exploited. Zhao and Schütze (2021) made initial explorations in this direction by showing that cross-lingual prompting outperforms cross-lingual finetuning. In this paper, we conduct an empirical analysis of the effect of each component in cross-lingual prompting and derive Universal Prompting across languages, which helps alleviate the discrepancy between source-language training and target-language inference. Based on this, we propose a mask token augmentation framework to further improve the performance of prompt-based cross-lingual transfer. Notably, on XNLI, our method achieves 46.54% accuracy with only 16 English training examples per class, significantly better than the 34.99% achieved by finetuning.
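To make the mask-token prompting setup concrete, the sketch below shows a generic cloze-style NLI classifier built on a multilingual masked LM: the premise and hypothesis are wrapped in a template containing a mask token, and the label is read off from the model's prediction at that position via an English verbalizer. The model name, template, and verbalizer words are illustrative assumptions, not the exact configuration used in the paper.

```python
# A minimal, hypothetical sketch of cloze-style (mask-token) prompting for XNLI
# with a multilingual masked LM. Template, verbalizer, and model are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "xlm-roberta-base"  # multilingual encoder (assumed choice)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# English verbalizer mapping NLI labels to single tokens (assumed mapping).
verbalizer = {"entailment": "Yes", "neutral": "Maybe", "contradiction": "No"}
label_token_ids = {
    lab: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
    for lab, word in verbalizer.items()
}

def classify(premise: str, hypothesis: str) -> str:
    # Cloze template: the model fills the mask with a verbalizer word.
    text = f"{premise}? {tokenizer.mask_token}, {hypothesis}"
    enc = tokenizer(text, return_tensors="pt")
    mask_pos = (enc.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**enc).logits[0, mask_pos[0]]
    # Score each label by the logit of its verbalizer token at the mask position.
    return max(label_token_ids, key=lambda lab: logits[label_token_ids[lab]].item())

print(classify("A man is playing a guitar on stage.",
               "Someone is performing music."))
```

In a cross-lingual transfer setting, the same English template and verbalizer would be trained on the 16 English examples per class and then applied directly to target-language premise/hypothesis pairs at inference time.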