Neural networks transform high-dimensional data into compact, structured representations, often modeled as elements of a lower-dimensional latent space. In this paper, we present an alternative interpretation of neural models as dynamical systems acting on the latent manifold. Specifically, we show that autoencoder models implicitly define a latent vector field on the manifold, derived by iteratively applying the encoding-decoding map, without any additional training. We observe that standard training procedures introduce inductive biases that lead to the emergence of attractor points within this vector field. Drawing on this insight, we propose to leverage the vector field as a representation of the network, providing a novel tool to analyze the properties of the model and the data. This representation enables us to: (i) analyze the generalization and memorization regimes of neural models, including throughout training; (ii) extract prior knowledge encoded in the network's parameters from the attractors, without requiring any input data; (iii) identify out-of-distribution samples from their trajectories in the vector field. We further validate our approach on vision foundation models, showcasing the applicability and effectiveness of our method in real-world scenarios.
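To make the construction concrete, below is a minimal sketch of the iterated encode-decode dynamics described above, assuming PyTorch-style `encoder` and `decoder` callables that map between data space and latent space. The function names `latent_vector_field` and `follow_trajectory` are illustrative, not taken from the paper.

```python
import torch

def latent_vector_field(z, encoder, decoder):
    # One step of the encode-decode map: V(z) = E(D(z)) - z.
    # Zeros of V are fixed points of the iteration; stable fixed
    # points act as attractors of the latent dynamics.
    return encoder(decoder(z)) - z

@torch.no_grad()
def follow_trajectory(z, encoder, decoder, n_steps=100, tol=1e-5):
    # Iterate z <- E(D(z)) and record the trajectory. Under the
    # inductive biases of standard training, trajectories tend to
    # converge toward attractor points of the vector field.
    trajectory = [z]
    for _ in range(n_steps):
        z_next = encoder(decoder(z))
        trajectory.append(z_next)
        if torch.norm(z_next - z) < tol:  # approximate fixed point
            break
        z = z_next
    return torch.stack(trajectory)
```

In this reading, the trajectory itself is the object of interest: where it converges reveals attractors (and hence prior knowledge stored in the weights), while how it moves can distinguish in-distribution from out-of-distribution samples.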