Large language models (LLMs) like ChatGPT and GPT-4 have exhibited remarkable abilities on a wide range of natural language processing (NLP) tasks, including machine translation performed during chat. However, these models are only accessible through restricted APIs, which creates barriers to new research and advancements in the field. Therefore, we propose the $\mathbf{ParroT}$ framework to enhance and regulate the translation abilities during chat, based on open-source LLMs (i.e., LLaMA-7b) and human-written translation and evaluation data. Specifically, ParroT reformulates translation data into the instruction-following style and introduces a "Hint" field for incorporating extra requirements to regulate the translation process. Accordingly, we propose three instruction types for finetuning ParroT models: translation instruction, contrastive instruction, and error-guided instruction. Experiments on two Flores subsets and the WMT22 test sets suggest that translation instruction significantly improves the translation performance of vanilla LLMs, while error-guided instruction leads to further improvement, demonstrating the importance of learning from low-quality translations annotated by humans. Meanwhile, ParroT models can also preserve their ability on general tasks when the Alpaca multi-task dataset is included in finetuning. Code: https://github.com/wxjiao/ParroT
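For intuition, here is a minimal sketch of how a parallel sentence pair could be reformulated into the instruction-following style with an optional "Hint" field. The `build_instruction_example` helper, the Alpaca-style prompt template, and the hint wording are illustrative assumptions, not the paper's exact format:

```python
# A minimal sketch (not the authors' exact template) of reformulating a
# translation pair into an Alpaca-style instruction-following record, with
# an optional "Hint" field carrying extra requirements (e.g., an error
# annotation for error-guided instruction) to regulate the translation.

def build_instruction_example(src_lang, tgt_lang, src_text, tgt_text, hint=None):
    """Pack a parallel sentence pair into an instruction-following record."""
    instruction = f"Translate the following sentences from {src_lang} to {tgt_lang}."
    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{src_text}\n"
    if hint is not None:
        # The "Hint" field injects extra constraints, e.g., a quality judgment
        # for error-guided instruction or a reference for contrastive instruction.
        prompt += f"\n### Hint:\n{hint}\n"
    prompt += "\n### Response:"
    return {"prompt": prompt, "completion": tgt_text}

# Plain translation instruction (no hint):
plain = build_instruction_example(
    "English", "German",
    "The weather is nice today.",
    "Das Wetter ist heute schön.",
)

# Hypothetical error-guided variant, where the hint flags translation errors:
guided = build_instruction_example(
    "English", "German",
    "The weather is nice today.",
    "Das Klima ist heute schön.",
    hint="The given translation contains a mistranslation error.",
)
print(plain["prompt"])
```

During finetuning, only the `completion` tokens would be used as supervision targets, so the model learns to condition its output on both the instruction and any hint present.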