Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have been largely confined to the language modality and rely on LLMs, which are hard to deploy. To elicit CoT reasoning in multimodal settings, a possible solution is to fine-tune small language models that fuse vision and language features to perform CoT reasoning. The key challenge is that such language models tend to generate hallucinated reasoning chains that mislead the answer inference. To mitigate the effect of such mistakes, we propose Multimodal-CoT, a framework that incorporates vision features and separates rationale generation and answer inference into two stages. By incorporating vision features in both stages, the model generates effective rationales that contribute to answer inference. With Multimodal-CoT, our model with under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16 percentage points (75.17% → 91.68%) on the ScienceQA benchmark and even surpasses human performance. Code is publicly available at https://github.com/amazon-science/mm-cot.
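The following is a minimal sketch of the two-stage pipeline described above: Stage 1 generates a rationale from fused language and vision features, and Stage 2 infers the answer from the original input augmented with that rationale, again conditioned on vision features. The class and function names (`FusedVLModel`, `multimodal_cot`, etc.) are illustrative placeholders and do not reflect the actual mm-cot implementation or API.

```python
# Illustrative sketch of two-stage Multimodal-CoT inference (not the mm-cot code).
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Sample:
    question: str                      # question + context + answer options
    vision_features: Sequence[float]   # features from an image encoder (assumed)


class FusedVLModel:
    """Placeholder for a small language model fused with vision features."""

    def __init__(self, target: str):
        self.target = target  # "rationale" or "answer"

    def generate(self, text: str, vision: Sequence[float]) -> str:
        # A real model would attend over text tokens and vision features here;
        # this stub only returns a tagged string for illustration.
        return f"<generated {self.target} for: {text[:40]}...>"


def multimodal_cot(sample: Sample,
                   rationale_model: FusedVLModel,
                   answer_model: FusedVLModel) -> str:
    # Stage 1: rationale generation conditioned on text AND vision features.
    rationale = rationale_model.generate(sample.question, sample.vision_features)
    # Stage 2: answer inference on the original input plus the generated
    # rationale, again incorporating vision features to ground the reasoning.
    fused_text = f"{sample.question}\nRationale: {rationale}"
    return answer_model.generate(fused_text, sample.vision_features)


if __name__ == "__main__":
    sample = Sample(question="Which property do these objects share? (A) hard (B) soft",
                    vision_features=[0.1, 0.2, 0.3])
    print(multimodal_cot(sample, FusedVLModel("rationale"), FusedVLModel("answer")))
```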