An effective method for cross-lingual transfer is to fine-tune a bilingual or multilingual model on a supervised dataset in one language and evaluate it on another language in a zero-shot manner. Translating examples at training time or at inference time is also a viable alternative. However, these methods come with costs that are rarely addressed in the literature. In this work, we analyze cross-lingual methods in terms of their effectiveness (e.g., accuracy), development and deployment costs, and their latencies at inference time. Our experiments on three tasks indicate that the best cross-lingual method is highly task-dependent. Finally, by combining zero-shot and translation methods, we achieve state-of-the-art results on two of the three datasets used in this work. Based on these results, we question the need for manually labeled training data in a target language. Code and translated datasets are available at https://github.com/unicamp-dl/cross-lingual-analysis
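As a concrete illustration of the zero-shot transfer setup described above, the following is a minimal sketch assuming a HuggingFace-style pipeline; the model (`xlm-roberta-base`), dataset (XNLI), language pair (English train, Spanish test), and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: fine-tune a multilingual encoder on English labels,
# then evaluate zero-shot on another language (no target-language labels).
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # assumed multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# XNLI: train on the English split, evaluate on the Spanish split.
train = load_dataset("xnli", "en", split="train")
test = load_dataset("xnli", "es", split="validation")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)
test = test.map(tokenize, batched=True)

def compute_metrics(p):
    preds = np.argmax(p.predictions, axis=1)
    return {"accuracy": (preds == p.label_ids).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=train,
    eval_dataset=test,  # Spanish labels are used only for scoring, never training
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # zero-shot accuracy on the target language
```

The translate-train and translate-test alternatives mentioned above would instead machine-translate `train` into the target language before fine-tuning, or translate `test` into English before inference, respectively.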