We introduce Chart2Code, a new benchmark for evaluating the chart understanding and code generation capabilities of large multimodal models (LMMs). Chart2Code is explicitly designed from a user-driven perspective, capturing diverse real-world scenarios and progressively increasing task difficulty. It consists of three levels: Level 1 (Chart Reproduction) asks models to reproduce a chart from a reference figure and a user query; Level 2 (Chart Editing) requires complex modifications such as changing the chart type or adding elements; and Level 3 (Long-Table to Chart Generation) requires transforming long, information-dense tables into faithful charts following user instructions. To our knowledge, this is the first hierarchical benchmark that reflects practical chart2code usage while systematically scaling task complexity. In total, Chart2Code contains 2,023 tasks across 22 chart types, paired with multi-level evaluation metrics that assess both code correctness and the visual fidelity of rendered charts. We benchmark 25 state-of-the-art (SoTA) LMMs, spanning proprietary systems and the latest open-source models, including GPT-5, Qwen2.5-VL, InternVL3/3.5, MiMo-VL, and Seed-1.6-VL. Experimental results show that even the SoTA model GPT-5 averages only 0.57 on code-based evaluation and 0.22 on chart-quality assessment across the editing tasks, underscoring the difficulty of Chart2Code. We anticipate this benchmark will drive advances in multimodal reasoning and foster the development of more robust and general-purpose LMMs. Our code and data are available at Chart2Code.