Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT). In this paper, we systematically investigate the advantages and challenges of LLMs for MMT by answering two questions: 1) How well do LLMs perform in translating massive languages? 2) Which factors affect LLMs' performance in translation? We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4. Our empirical results show that the translation capabilities of LLMs are continually improving. GPT-4 has beaten the strong supervised baseline NLLB in 40.91% of translation directions but still faces a large gap to the commercial translation system, especially on low-resource languages. Through further analysis, we discover that LLMs exhibit new working patterns when used for MMT. First, instruction semantics can surprisingly be ignored when in-context exemplars are given. Second, cross-lingual exemplars can provide better task guidance for low-resource translation than exemplars in the same language pair. Third, LLMs can acquire translation ability in a resource-efficient way and generate moderate translations even for zero-resource languages.
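To make the in-context and cross-lingual exemplar findings concrete, the following is a minimal sketch of how a few-shot translation prompt might be assembled; the template, function names, and language choices are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch of in-context prompting for MT with (optionally cross-lingual) exemplars.
# The prompt template and language pairs below are assumptions for illustration only.

def build_translation_prompt(exemplars, src_lang, tgt_lang, source_text):
    """Assemble a few-shot prompt.

    Each exemplar is a (ex_src_lang, ex_tgt_lang, ex_src, ex_tgt) tuple. Exemplars may
    come from a different (typically high-resource) language pair than the sentence
    being translated, i.e. cross-lingual exemplars.
    """
    lines = []
    for ex_src_lang, ex_tgt_lang, ex_src, ex_tgt in exemplars:
        lines.append(f"{ex_src_lang}: {ex_src}")
        lines.append(f"{ex_tgt_lang}: {ex_tgt}")
    # The test instance: give the source sentence and leave the target for the model to complete.
    lines.append(f"{src_lang}: {source_text}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)


if __name__ == "__main__":
    # High-resource German-English exemplars guiding a low-resource Icelandic-English request
    # (the language choices here are hypothetical examples).
    exemplars = [
        ("German", "English", "Guten Morgen.", "Good morning."),
        ("German", "English", "Wie geht es dir?", "How are you?"),
    ]
    prompt = build_translation_prompt(exemplars, "Icelandic", "English", "Góðan daginn.")
    print(prompt)
```

In this style of prompt, the exemplars themselves convey the task format, which is consistent with the observation that explicit instruction semantics can be ignored once in-context exemplars are provided.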