Although large language models (LLMs) often produce impressive outputs, they can also fail to reason correctly and to remain factual. We set out to investigate how these limitations affect an LLM's ability to answer and reason about difficult real-world questions. We applied the human-aligned GPT-3 (InstructGPT) to answer multiple-choice medical exam questions (USMLE and MedMCQA) and medical research questions (PubMedQA). We investigated chain-of-thought prompting ("think step by step"), grounding (augmenting the prompt with search results), and few-shot prompting (prepending the question with question-answer exemplars). For a subset of the USMLE questions, a medical domain expert reviewed and annotated the model's reasoning. Overall, GPT-3 achieved a substantial improvement over the previous state-of-the-art machine learning performance. We observed that GPT-3 is often knowledgeable and can reason about medical questions. However, when confronted with a question it cannot answer, GPT-3 will still attempt an answer, often resulting in a biased predictive distribution. LLMs are not yet on par with human performance, but our results suggest the emergence of reasoning patterns that are compatible with medical problem-solving. We speculate that scaling model size and data, enhancing prompt alignment, and allowing for better contextualization of the completions will be sufficient for LLMs to reach human-level performance on this type of task.
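For concreteness, the sketch below illustrates how the three prompting strategies mentioned above might be assembled for a multiple-choice question. The function names, prompt wording, and the example question are illustrative assumptions for exposition only; they are not the paper's released implementation.

```python
# Minimal sketch of the three prompting strategies (chain-of-thought, grounding,
# few-shot) for a multiple-choice question. Names and prompt wording are
# illustrative assumptions, not the paper's released code.

def cot_prompt(question: str, options: dict[str, str]) -> str:
    """Chain-of-thought prompt: append a 'think step by step' cue."""
    choices = "\n".join(f"{k}) {v}" for k, v in options.items())
    return f"Question: {question}\n{choices}\nAnswer: Let's think step by step."

def grounded_prompt(question: str, options: dict[str, str], passages: list[str]) -> str:
    """Grounding: prepend retrieved search results as context before the question."""
    context = "\n".join(f"Context: {p}" for p in passages)
    return f"{context}\n\n{cot_prompt(question, options)}"

def few_shot_prompt(exemplars: list[tuple[str, dict[str, str], str]],
                    question: str, options: dict[str, str]) -> str:
    """Few-shot: prepend worked question-answer exemplars before the target question."""
    shots = "\n\n".join(
        f"{cot_prompt(q, opts)} {worked_answer}"
        for q, opts, worked_answer in exemplars
    )
    return f"{shots}\n\n{cot_prompt(question, options)}"

if __name__ == "__main__":
    # Hypothetical example question for illustration.
    q = "Deficiency of which vitamin causes scurvy?"
    opts = {"A": "Vitamin A", "B": "Vitamin B12", "C": "Vitamin C", "D": "Vitamin D"}
    print(cot_prompt(q, opts))
```

In each case the assembled prompt would be sent to the language model as-is, and the completion parsed for the selected answer option.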