Large language models (LLMs) have demonstrated that large-scale pretraining enables systems to adapt rapidly to new problems with little supervision in the language domain. This success, however, has not translated as effectively to the visual domain, where models, including LLMs, continue to struggle with compositional understanding, sample efficiency, and general-purpose problem-solving. We investigate Video Diffusion Models (VDMs) as a promising direction for bridging this gap. Pretraining on spatiotemporal data endows these models with strong inductive biases for structure and dynamics, which we hypothesize can support broad task adaptability. To test this, we design a controlled evaluation in which both a pretrained LLM and a pretrained VDM are equipped with lightweight adapters and presented with tasks in their natural modalities. Across benchmarks including ARC-AGI, ConceptARC, visual games, route planning, and cellular automata, VDMs demonstrate higher data efficiency than their language counterparts. Taken together, our results indicate that video pretraining offers inductive biases that support progress toward visual foundation models.
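The abstract states that both pretrained models are "equipped with lightweight adapters" without specifying the adapter family. As a minimal sketch of what such a lightweight adapter could look like, assuming a LoRA-style low-rank update (a common choice, not confirmed by the abstract), the snippet below wraps a frozen linear layer with trainable low-rank factors; the class and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """LoRA-style adapter around a frozen linear layer (illustrative assumption).

    The base weight stays frozen; only the small A/B factors are trained,
    so the same recipe could in principle be attached to either an LLM or
    a VDM backbone.
    """

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank update: W x + (alpha / rank) * B A x
        self.A = nn.Parameter(torch.randn(rank, base_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base_linear.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


# Usage: wrap a projection layer inside a frozen pretrained backbone.
layer = nn.Linear(512, 512)
adapted = LowRankAdapter(layer, rank=8)
out = adapted(torch.randn(2, 512))  # shape (2, 512); only A and B receive gradients
```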