Large language models (LLMs) such as GPT-3 and ChatGPT have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to 'hallucinate' facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps in natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present the first release of ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates six scientific/medical, three general-domain, and five math word question answering datasets.
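To illustrate the technique the abstract describes, a chain-of-thought prompt prepends worked exemplars whose answers spell out intermediate reasoning steps, rather than bare answers. The following minimal sketch uses only plain strings (no model API); the exemplar is the well-known tennis-ball example from the chain-of-thought prompting literature, and the helper function `build_prompt` is a hypothetical name introduced here for illustration, not part of ThoughtSource:

```python
# Minimal sketch of chain-of-thought (CoT) prompting with plain strings.
# A standard few-shot exemplar gives only the final answer; a CoT
# exemplar verbalizes the intermediate reasoning steps.

standard_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: 11\n"
)

cot_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is "
    "2 * 3 = 6 balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_prompt(exemplar: str, question: str) -> str:
    """Prepend a worked exemplar to the target question (hypothetical helper)."""
    return f"{exemplar}\nQ: {question}\nA:"

# The same target question, prompted with and without reasoning steps:
question = "A farm has 3 pens with 4 sheep each. How many sheep are there?"
print(build_prompt(standard_exemplar, question))
print(build_prompt(cot_exemplar, question))
```

Prompted this way, the model is encouraged to continue the pattern and emit its own reasoning chain before the answer, which is the behavior ThoughtSource's datasets capture and standardize.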