In healthcare, any LLM-generated output must be reliable and accurate, particularly when it informs decision-making and patient safety. However, LLM outputs are often unreliable in such critical settings because of the risk of hallucination. To address this issue, we propose a fact-checking module that operates independently of any LLM, together with a domain-specific summarization model designed to minimize the hallucination rate. The summarization model is fine-tuned with Low-Rank Adaptation (LoRA) on the full MIMIC-III dataset and is paired with the fact-checking module, which applies numerical correctness tests and fine-grained logical checks, expressed as discrete logic over natural language, to validate facts against electronic health records (EHRs). To evaluate the fact-checking module, we sampled 104 summaries and decomposed them into 3,786 propositions, which served as the facts to verify. The fact-checking module achieves a precision of 0.8904, a recall of 0.8234, and an F1-score of 0.8556, while the summarization model achieves a ROUGE-1 score of 0.5797 and a BERTScore of 0.9120 for summary quality.
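For context, the LoRA fine-tuning named above could be set up roughly as in the following sketch, assuming the Hugging Face transformers and peft libraries; the base model name, rank, and other hyperparameters shown here are illustrative placeholders, not values reported in this abstract.

```python
# Illustrative LoRA fine-tuning setup (hypothetical base model and hyperparameters).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; the paper's base model is not stated here
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects trainable low-rank matrices into the attention projections,
# so only a small fraction of the parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices
    lora_alpha=32,             # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapter weights are trainable
```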
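The numerical side of the fact-checking module can be pictured with the minimal, self-contained sketch below: numeric claims in a proposition are matched against structured EHR values within a tolerance. The names (`ehr_record`, `check_numeric_claims`), the toy data, and the tolerance are hypothetical; the module's discrete logical checks are not reproduced here.

```python
# Hypothetical sketch of an LLM-independent numerical fact check.
import re

# Toy structured EHR values (unit-normalized), keyed by measurement name.
ehr_record = {"hemoglobin": 9.2, "creatinine": 1.4, "heart rate": 88.0}

NUM = re.compile(r"(-?\d+(?:\.\d+)?)")

def check_numeric_claims(proposition: str, ehr: dict, rel_tol: float = 0.01) -> bool:
    """Return True if every EHR measurement mentioned in the proposition
    carries a number matching the record within a relative tolerance."""
    text = proposition.lower()
    for name, true_value in ehr.items():
        if name not in text:
            continue
        # Take the first number that follows the measurement's mention.
        tail = text.split(name, 1)[1]
        match = NUM.search(tail)
        if match is None:
            return False  # measurement mentioned without a value
        claimed = float(match.group(1))
        if abs(claimed - true_value) > rel_tol * abs(true_value):
            return False  # numeric hallucination detected
    return True

print(check_numeric_claims("Hemoglobin was 9.2 g/dL on admission.", ehr_record))   # True
print(check_numeric_claims("Hemoglobin was 12.0 g/dL on admission.", ehr_record))  # False
```

Running propositions through a check of this kind requires no model inference, which is what makes the module independent of any LLM.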