A cognitive map is an internal model that encodes the abstract relationships among entities in the world, giving humans and animals the flexibility to adapt to new situations with a degree of out-of-distribution (OOD) generalization that current AI systems still lack. To bridge this gap, we introduce MapFormers, a family of architectures based on Transformer models that learn cognitive maps from observational data and perform path integration in parallel, in a self-supervised manner. The model learns cognitive maps by disentangling the structural relationships in the inputs from their specific content, a property that arises naturally when the positional encoding in Transformers is updated with input-dependent matrices. We developed two variants of MapFormers that unify absolute and relative positional encoding to model episodic memory (EM) and working memory (WM), respectively. We tested MapFormers on several tasks, including a classic 2D navigation task, showing that, unlike current architectures, our models learn a cognitive map of the underlying space and generalize OOD (e.g., to longer sequences) with near-perfect performance. Together, these results demonstrate the superiority of models designed to learn a cognitive map, and the importance of a structural bias for structure-content disentanglement, which Transformers can achieve with input-dependent positional encoding. MapFormers have broad applications in both neuroscience and AI: they can explain the neural mechanisms that give rise to cognitive maps while allowing these relational models to be learned at scale.
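To make the central mechanism concrete, the sketch below shows one way input-dependent positional encoding could look in PyTorch. It is a minimal illustration, not the paper's implementation: the module names (`InputDependentPE`, `StructuralAttention`), the `tanh` nonlinearity, and the recurrent update p_t = tanh(M(x_t) p_{t-1}) are assumptions we introduce here. The key idea it instantiates is that attention weights are computed from the positional (structural) stream while values come from the content stream, which is one way to realize structure-content disentanglement.

```python
# Illustrative sketch (assumed names and design, not the authors' code):
# a positional state is path-integrated by matrices generated from each
# input, and attention weights are computed from that structural stream.
import torch
import torch.nn as nn


class InputDependentPE(nn.Module):
    """Positional codes p_t = tanh(M(x_t) @ p_{t-1}), with M generated from the input."""

    def __init__(self, d_model: int, d_pos: int):
        super().__init__()
        self.to_matrix = nn.Linear(d_model, d_pos * d_pos)  # x_t -> transition matrix M(x_t)
        self.p0 = nn.Parameter(torch.randn(d_pos))          # learned initial positional state
        self.d_pos = d_pos

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> positional codes: (batch, seq, d_pos)
        B, T, _ = x.shape
        M = self.to_matrix(x).view(B, T, self.d_pos, self.d_pos)
        p = self.p0.expand(B, self.d_pos).unsqueeze(-1)     # (B, d_pos, 1)
        codes = []
        for t in range(T):
            p = torch.tanh(M[:, t] @ p)                     # input-dependent path-integration step
            codes.append(p.squeeze(-1))
        return torch.stack(codes, dim=1)


class StructuralAttention(nn.Module):
    """Single-head attention whose weights depend only on the structural stream."""

    def __init__(self, d_model: int, d_pos: int):
        super().__init__()
        self.pe = InputDependentPE(d_model, d_pos)
        self.q = nn.Linear(d_pos, d_pos)
        self.k = nn.Linear(d_pos, d_pos)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.pe(x)                                      # structure: where each input "is"
        scores = self.q(p) @ self.k(p).transpose(-2, -1) / self.pe.d_pos ** 0.5
        return scores.softmax(dim=-1) @ self.v(x)           # content: what each input "says"


# Toy usage: a batch of 2 sequences of length 16, model width 32.
layer = StructuralAttention(d_model=32, d_pos=8)
out = layer(torch.randn(2, 16, 32))
print(out.shape)  # torch.Size([2, 16, 32])
```

Because the attention weights here depend only on the positional stream, the learned map of "where things are" is kept separate from "what things are", which is the disentanglement the abstract refers to; a practical model would add causal masking and multiple heads.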