Large language models (LLMs) like ChatGPT and GPT-4 have exhibited remarkable abilities on a wide range of natural language processing (NLP) tasks, including machine translation performed during chat. However, these models are accessible only through restricted APIs, which creates barriers to new research and advancements in the field. Therefore, we propose the $\mathbf{ParroT}$ framework to enhance and regulate the translation abilities during chat, based on open-sourced LLMs (i.e., LLaMA-7b) and human-written translation and evaluation data. Specifically, ParroT reformulates translation data into the instruction-following style, and introduces a "Hint" field for incorporating extra requirements to regulate the translation process. Accordingly, we propose three instruction types for finetuning ParroT models: translation instruction, contrastive instruction, and error-guided instruction. Experiments on Flores subsets and WMT22 test sets suggest that translation instruction significantly improves the translation performance of vanilla LLMs, while error-guided instruction leads to further improvement, demonstrating the importance of learning from low-quality translations annotated by humans. Meanwhile, the ParroT models also preserve their ability on general tasks when the Alpaca multi-task dataset is included in finetuning. Code: https://github.com/wxjiao/ParroT
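To make the instruction-following reformulation concrete, below is a minimal sketch of how a ParroT-style training prompt could be assembled. It assumes the Alpaca-style template that ParroT builds on; the function name `build_prompt` and the exact wording of the optional "Hint" field are illustrative assumptions rather than the repository's verbatim schema.

```python
from typing import Optional

def build_prompt(src_lang: str, tgt_lang: str, source: str,
                 hint: Optional[str] = None) -> str:
    """Format one translation example in Alpaca-style instruction-following form.

    This is a sketch of the idea described in the abstract, not the exact
    template used in the ParroT repository.
    """
    prompt = (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\nTranslate the following sentences from {src_lang} "
        f"to {tgt_lang}.\n\n"
        f"### Input:\n{source}\n\n"
    )
    if hint is not None:
        # The optional "Hint" field carries extra requirements that regulate
        # the translation, e.g. contrastive references or error annotations
        # (hypothetical wording below).
        prompt += f"### Hint:\n{hint}\n\n"
    prompt += "### Response:"
    return prompt

# Plain translation instruction (no hint):
print(build_prompt("German", "English", "Das Wetter ist heute schön."))

# Error-guided instruction: the hint flags the error severity of a reference,
# steering the model away from low-quality translations.
print(build_prompt("German", "English", "Das Wetter ist heute schön.",
                   hint="A translation with no errors could be"))
```

The key design point this illustrates is that the same source sentence can be paired with different hints, so one parallel corpus yields translation, contrastive, and error-guided instructions for finetuning.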