We develop a unified theoretical framework for neural architectures whose internal representations arise as stationary states of dissipative Schr\"odinger-type dynamics on learned latent graphs. Each layer is defined by a fixed-point Schr\"odinger-type equation depending on a weighted Laplacian encoding the latent geometry and a convex local potential. We prove existence, uniqueness, and smooth dependence of equilibria on inputs and parameters, and show that the dynamics are equivalent under the Bloch map to norm-preserving Landau--Lifshitz flows. Training over graph weights and topology is formulated as stochastic optimization on a stratified moduli space of graphs equipped with a natural K\"ahler--Hessian metric, ensuring convergence and differentiability across strata. We derive PAC-Bayes, stability, and Rademacher-complexity generalization bounds in terms of geometric quantities such as edge count, maximal degree, and Gromov--Hausdorff distortion, establishing that sparsity and geometric regularity control capacity. Feed-forward composition of stationary layers is proven equivalent to a single global stationary diffusion on a supra-graph, and backpropagation is its adjoint stationary system. Finally, directed and vector-valued extensions are represented by sheaf Laplacians with unitary connections, unifying scalar, directed, and sheaf-based graph architectures. The resulting model class provides a compact, geometrically interpretable, and analytically tractable foundation for learning latent graph geometry via fixed-point Schr\"odinger-type activations.
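For concreteness, a minimal illustrative sketch of one canonical form such a fixed-point layer can take is given below; the symbols $L_w$, $V$, $\beta$, and $\Phi_G$ are introduced here for illustration and are not fixed by the text above:
\[
  \bigl(L_w + \beta\,\nabla V\bigr)(h^{\star}) \;=\; x,
  \qquad
  \Phi_G(x) \;:=\; h^{\star},
\]
where $L_w$ is the weighted Laplacian of the latent graph $G$, $V$ is the convex local potential acting coordinatewise, $\beta>0$ is a coupling constant, and $x$ is the layer input. Under strong convexity of $V$, the operator $L_w + \beta\,\nabla V$ is strongly monotone, so standard monotone-operator arguments yield existence and uniqueness of $h^{\star}$, and the implicit function theorem gives smooth dependence of $\Phi_G(x)$ on $x$ and on the edge weights; this is one routine route to the stated properties, not necessarily the one taken in the paper.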