This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose MiSI-Bench, a systematic benchmark framework. It comprises over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identification. Experimental results reveal that current state-of-the-art VLMs perform well below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans on spatial transformation tasks, while its poor performance on scientifically grounded tasks such as hydrogen-bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.
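For readers who want to inspect the benchmark directly, the sketch below shows one way to load it with the Hugging Face `datasets` library. The split name and field names used here are assumptions for illustration only; the actual schema should be checked against the dataset card at the URL above.

```python
# Minimal sketch: loading MiSI-Bench via the Hugging Face `datasets` library.
# The split name ("train") and field names ("question", "answer") below are
# assumptions; consult https://huggingface.co/datasets/zongzhao/MiSI-bench
# for the dataset's actual configurations and schema.
from datasets import load_dataset

dataset = load_dataset("zongzhao/MiSI-bench", split="train")  # assumed split

example = dataset[0]
print(example.keys())  # inspect the actual field names first
print(example.get("question"), example.get("answer"))  # assumed fields
```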