We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole-shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which improves dramatically with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram.