Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable proficiency across a range of NLP tasks such as machine translation, question answering, text summarization, and natural language understanding. Recent research has shown that using ChatGPT to assess the quality of machine translation (MT) achieves state-of-the-art performance at the system level but performs poorly at the segment level. To further improve the performance of LLMs on MT quality assessment, we investigate several prompting methods. Our results indicate that by combining Chain-of-Thought and Error Analysis into a new prompting method, \textbf{\texttt{Error Analysis Prompting}}, LLMs such as ChatGPT can \textit{generate human-like MT evaluations at both the system and segment levels}. Additionally, we identify several limitations of ChatGPT as an MT evaluator, such as unstable scoring and biases when multiple translations are provided in a single query. Our findings aim to provide preliminary guidance on appropriately evaluating translation quality with ChatGPT, along with practical tips for designing prompts for in-context learning. We anticipate that this report will shed new light on advancing translation evaluation with LLMs by enhancing both the accuracy and reliability of metrics. The project is available at \url{https://github.com/Coldmist-Lu/ErrorAnalysis_Prompt}.