Building causal graphs can be a laborious process. To ensure that all relevant causal pathways are captured, researchers often must consult clinicians and domain experts while also reviewing extensive medical literature. By encoding both commonsense and medical knowledge, large language models (LLMs) offer an opportunity to ease this process by automatically scoring edges (i.e., connections between two variables) in candidate graphs. However, LLMs have been shown to be brittle to the choice of probing words, context, and prompts that the user employs. In this work, we evaluate whether LLMs can be a useful tool for complementing causal graph development.
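As a minimal illustration of what "scoring an edge" could look like in practice, the sketch below prompts a language model to rate the plausibility of a direct causal link between two variables. The `query_llm` helper and the prompt wording are hypothetical placeholders, not the specific procedure used in this work.

```python
# Illustrative sketch only: one way an LLM might be asked to score a candidate
# causal edge. `query_llm` is a hypothetical wrapper around any chat-style API.

def score_edge(cause: str, effect: str, query_llm) -> float:
    """Ask an LLM how plausible a direct causal effect of `cause` on `effect`
    is, and parse the reply into a score in [0, 1]."""
    prompt = (
        f"On a scale from 0 (no causal relationship) to 1 (definite causal "
        f"relationship), how likely is it that '{cause}' directly causes "
        f"'{effect}'? Answer with a single number."
    )
    reply = query_llm(prompt)
    try:
        # Clamp to [0, 1] in case the model returns an out-of-range number.
        return max(0.0, min(1.0, float(reply.strip())))
    except ValueError:
        return 0.0  # Fall back to 0 if the reply is not numeric.


if __name__ == "__main__":
    # Stub model used in place of a real LLM call, for demonstration only.
    fake_llm = lambda prompt: "0.8"
    print(score_edge("smoking", "lung cancer", fake_llm))  # -> 0.8
```

Because the abstract notes that LLMs are brittle to prompt wording, any such scoring scheme would in practice need to be evaluated across paraphrased prompts and contexts rather than a single template.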