Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have focused on the language modality. We propose Multimodal-CoT, which incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation from answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. With Multimodal-CoT, our model under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16 percentage points (75.17% → 91.68%) on the ScienceQA benchmark and even surpasses human performance. Code is publicly available at https://github.com/amazon-science/mm-cot.
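To make the two-stage decomposition concrete, here is a minimal, hypothetical sketch of the flow the abstract describes: stage one fuses the text and vision inputs to generate a rationale, and stage two conditions on the original inputs plus that rationale to produce the answer. The function names, the stub model callable, and the feature placeholders are illustrative assumptions, not the released implementation (see https://github.com/amazon-science/mm-cot for the actual code).

```python
# Hypothetical sketch of the two-stage Multimodal-CoT flow; not the official code.

from dataclasses import dataclass
from typing import List


@dataclass
class MultimodalInput:
    question: str                  # textual problem statement (question, options, context)
    vision_features: List[float]   # placeholder for extracted image features


def generate_rationale(model, x: MultimodalInput) -> str:
    """Stage 1: fuse language and vision inputs to produce a reasoning chain."""
    # In practice this would be a forward pass of a fused text+vision model;
    # here we simply delegate to a user-supplied callable.
    return model(prompt=x.question, vision=x.vision_features, target="rationale")


def infer_answer(model, x: MultimodalInput, rationale: str) -> str:
    """Stage 2: condition on the original inputs plus the generated rationale."""
    augmented_prompt = f"{x.question}\nRationale: {rationale}"
    return model(prompt=augmented_prompt, vision=x.vision_features, target="answer")


def multimodal_cot(model, x: MultimodalInput) -> str:
    """Run both stages: rationale generation, then answer inference."""
    rationale = generate_rationale(model, x)
    return infer_answer(model, x, rationale)


if __name__ == "__main__":
    # Dummy "model" so the sketch runs end to end without any weights.
    def dummy_model(prompt: str, vision, target: str) -> str:
        return f"<{target} for: {prompt.splitlines()[0]}>"

    example = MultimodalInput(
        question="Which force moves the sled downhill? (A) gravity (B) magnetism",
        vision_features=[0.0] * 4,
    )
    print(multimodal_cot(dummy_model, example))
```

The point of the separation is visible in `infer_answer`: the answer stage sees a rationale that was already grounded in the visual features, rather than having to reason and answer in a single pass.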