Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies are mostly confined to the language modality and rely on LLMs, which are hard to deploy. To elicit CoT reasoning in multimodal settings, a possible solution is to fine-tune small language models by fusing vision and language features to perform CoT reasoning. The key challenge is that such language models tend to generate hallucinated reasoning chains that mislead the answer inference. To mitigate the effect of such mistakes, we propose Multimodal-CoT, which incorporates vision features in a decoupled training framework. The framework separates rationale generation and answer inference into two stages. By incorporating the vision features in both stages, the model is able to generate effective rationales that contribute to answer inference. With Multimodal-CoT, our model under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16 percentage points (75.17% → 91.68% accuracy) on the ScienceQA benchmark and even surpasses human performance. Code is publicly available at https://github.com/amazon-science/mm-cot.
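To make the two-stage pipeline concrete, the following is a minimal sketch of the decoupled rationale-generation and answer-inference stages. The class and field names (Example, RationaleGenerator, AnswerInferrer) are illustrative placeholders under my own assumptions, not the API of the official repository; they only show how stage 1's output feeds stage 2 alongside the question and vision features.

```python
# Minimal sketch of the two-stage Multimodal-CoT inference pipeline.
# All classes below are illustrative placeholders (assumptions), not the
# authors' actual implementation; see the official repo for the real code.

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    question: str            # question text plus answer options
    vision_features: List    # e.g., patch-level features from a frozen vision encoder


class RationaleGenerator:
    """Stage 1: fuse text and vision features, then generate a rationale."""

    def generate(self, example: Example) -> str:
        # In the paper this is a fine-tuned encoder-decoder LM whose text
        # encoding is fused with vision features before decoding.
        return "Rationale: <generated reasoning chain>"


class AnswerInferrer:
    """Stage 2: condition on the question, the stage-1 rationale, and vision features."""

    def infer(self, example: Example, rationale: str) -> str:
        return "Answer: <predicted option>"


def multimodal_cot(example: Example,
                   stage1: RationaleGenerator,
                   stage2: AnswerInferrer) -> str:
    rationale = stage1.generate(example)      # decoupled stage 1: rationale generation
    return stage2.infer(example, rationale)   # decoupled stage 2: answer inference


if __name__ == "__main__":
    ex = Example(question="Which property do these objects share? (A) hard (B) soft",
                 vision_features=[])
    print(multimodal_cot(ex, RationaleGenerator(), AnswerInferrer()))
```

The key design point the sketch reflects is the decoupling: the two stages are trained separately, and the answer-inference model sees the generated rationale as additional input rather than producing rationale and answer in one pass.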