Knowledge-dependent tasks typically use two sources of knowledge: parametric, learned at training time, and contextual, given as a passage at inference time. To understand how models use these sources together, we formalize the problem of knowledge conflicts, where the contextual information contradicts the learned information. Analyzing the behaviour of popular models, we measure their over-reliance on memorized information (the cause of hallucinations), and uncover important factors that exacerbate this behaviour. Lastly, we propose a simple method to mitigate over-reliance on parametric knowledge, which minimizes hallucination and improves out-of-distribution generalization by 4%-7%. Our findings demonstrate the importance for practitioners of evaluating a model's tendency to hallucinate rather than read, and show that our mitigation strategy encourages generalization to evolving information (i.e., time-dependent queries). To encourage these practices, we have released our framework for generating knowledge conflicts.