Large Language Models (LLMs) have demonstrated remarkable proficiency in generating text that closely resembles human writing. However, they often produce factually incorrect statements, a problem commonly referred to as 'hallucination'. Addressing hallucination is crucial for enhancing the reliability and effectiveness of LLMs. While much research has focused on hallucinations in English, our study extends this investigation to conversational data in three languages: Hindi, Farsi, and Mandarin. We present a comprehensive analysis of a dataset to examine both factual and linguistic errors in these languages for GPT-3.5, GPT-4o, Llama-3.1, Gemma-2.0, DeepSeek-R1, and Qwen-3. We found that LLMs produce very few hallucinated responses in Mandarin but generate a significantly higher number of hallucinations in Hindi and Farsi.