A major concern when deploying LLMs in accuracy-critical domains such as sports reporting is that the generated text may not faithfully reflect the input data. We quantify how input structure affects hallucinations and other factual errors in LLM-generated summaries of NBA play-by-play data across three input formats: row-structured, JSON, and unstructured. We manually annotated 3,312 factual errors across 180 game summaries produced by two models, Llama-3.1-70B and Qwen2.5-72B. Input structure has a strong effect: relative to unstructured input, JSON input reduces error rates by 69% for Llama and 65% for Qwen, while row-structured input reduces them by 54% for Llama and 51% for Qwen. A two-way repeated measures ANOVA shows that input structure accounts for over 80% of the variance in error rates, and Tukey HSD post hoc tests confirm statistically significant differences between all input formats.
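The analysis pipeline named above can be sketched in Python with pandas, statsmodels, and SciPy. This is a minimal illustration, not the paper's actual analysis code: the error values are synthetic placeholders (loosely echoing the reported ordering, unstructured worst), and the column names, game count, and noise level are assumptions for the sketch.

```python
# Hedged sketch: two-way repeated-measures ANOVA (model x input format)
# followed by a Tukey HSD post hoc test. All data below are SYNTHETIC
# placeholders, not the paper's annotated error counts.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import tukey_hsd

rng = np.random.default_rng(0)

# Synthetic per-game error counts: 5 games x 2 models x 3 input formats.
base = {"unstructured": 20.0, "row": 9.5, "json": 6.5}  # illustrative means
rows = []
for game in range(5):
    for model in ("llama", "qwen"):
        for fmt, mu in base.items():
            rows.append({"game": game, "model": model, "fmt": fmt,
                         "errors": mu + rng.normal(0.0, 0.8)})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: each game contributes exactly one
# observation per (model, format) cell, so "game" is the repeated subject.
aov = AnovaRM(df, depvar="errors", subject="game",
              within=["model", "fmt"]).fit()
print(aov.anova_table)

# Tukey HSD post hoc test across the three input formats (pooling models).
samples = [df.loc[df.fmt == f, "errors"].to_numpy() for f in base]
res = tukey_hsd(*samples)
print(res)
```

With real data, one observation per game-model-format cell (e.g. the per-summary error rate) is required for `AnovaRM`, since it expects a balanced design with the repeated factor as the subject.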