Multimodal large language models (MLLMs) have achieved remarkable progress on vision-language tasks, yet their reasoning processes can still be unreliable. We introduce PRISM-Bench, a benchmark of puzzle-based visual challenges designed to evaluate not only whether models can solve problems, but also how their reasoning unfolds. Unlike prior evaluations that measure only final-answer accuracy, PRISM-Bench introduces a diagnostic task: given a visual puzzle and a step-by-step chain-of-thought (CoT) containing exactly one error, models must identify the first incorrect step. This setting enables fine-grained assessment of logical consistency, error detection, and visual reasoning. The puzzles in PRISM-Bench require multi-step symbolic, geometric, and analogical reasoning, resisting shortcuts based on superficial pattern matching. Evaluations across state-of-the-art MLLMs reveal a persistent gap between fluent generation and faithful reasoning: models that produce plausible CoTs often fail to locate simple logical faults. By disentangling answer generation from reasoning verification, PRISM-Bench offers a sharper lens on multimodal reasoning competence and underscores the need for diagnostic evaluation protocols in the development of trustworthy MLLMs.
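For concreteness, the diagnostic task can be scored as first-error-localization accuracy: the fraction of puzzles on which a model pinpoints the ground-truth first incorrect CoT step. The Python sketch below is illustrative only, not the benchmark's actual harness; the record layout (`puzzle_image`, `cot_steps`, `first_error_step`) and the caller-supplied `predict_first_error` function are assumptions introduced for this example.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PuzzleItem:
    # Hypothetical record layout; the released benchmark's schema may differ.
    puzzle_image: str        # path to the puzzle image
    cot_steps: List[str]     # step-by-step chain of thought shown to the model
    first_error_step: int    # 1-indexed position of the single injected error


def error_localization_accuracy(
    items: List[PuzzleItem],
    predict_first_error: Callable[[str, List[str]], int],
) -> float:
    """Fraction of items where the model identifies the first incorrect step."""
    if not items:
        return 0.0
    correct = 0
    for item in items:
        # The model sees the puzzle and the full CoT, and returns a step index.
        predicted_step = predict_first_error(item.puzzle_image, item.cot_steps)
        correct += int(predicted_step == item.first_error_step)
    return correct / len(items)
```

Because each chain contains exactly one error by construction, a single integer prediction per item suffices, which keeps the metric simple and directly comparable across models.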