Multilingual transformers (XLM, mT5) have been shown to have remarkable zero-shot cross-lingual transfer abilities. Most transfer studies, however, rely on automatically translated resources (XNLI, XQuAD), making it hard to discern the particular linguistic knowledge that is being transferred, and the role of expert-annotated monolingual datasets in developing task-specific models. We investigate the cross-lingual transfer abilities of XLM-R for Chinese and English natural language inference (NLI), with a focus on the recent large-scale Chinese dataset OCNLI. To better understand linguistic transfer, we created 4 categories of challenge and adversarial tasks (totaling 17 new datasets) for Chinese that build on several well-known resources for English (e.g., HANS, NLI stress tests). We find that cross-lingual models trained on English NLI do transfer well across our Chinese tasks (e.g., in 3/4 of our challenge categories, they perform as well as or better than the best monolingual models, even on 3/5 uniquely Chinese linguistic phenomena such as idioms and pro-drop). These results, however, come with important caveats: cross-lingual models often perform best when trained on a mixture of English and high-quality monolingual NLI data (OCNLI), and are often hindered by automatically translated resources (XNLI-zh). For many phenomena, all models continue to struggle, highlighting the need for our new diagnostics to help benchmark Chinese and cross-lingual models. All new datasets/code are released at https://github.com/huhailinguist/ChineseNLIProbing.