The Hopfield model has a long-standing tradition in statistical physics, being one of the few neural networks for which a theory is available. Extending the theory of Hopfield models to correlated data could help explain the success of deep neural networks, for instance by describing how they extract features from data. Motivated by this, we propose and investigate a generalized Hopfield model that we name the Hidden-Manifold Hopfield Model: we generate the couplings from $P=\alpha N$ examples with the Hebb rule, where each example is obtained through a non-linear transformation of $D=\alpha_D N$ random vectors that we call factors, and $N$ is the number of neurons. Using the replica method, we obtain a phase diagram for the model that shows a phase transition in which the factors hidden in the examples become attractors of the dynamics; this phase exists above a critical value of $\alpha$ and below a critical value of $\alpha_D$. We call this behaviour the learning transition.
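The coupling construction described above can be sketched numerically. The following is a minimal illustration, not the paper's exact protocol: the specific choices of Gaussian factors, Gaussian mixing coefficients, and a sign non-linearity are assumptions made here for concreteness; only the overall structure (examples as non-linear transforms of $D$ factors, couplings from the Hebb rule) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                      # number of neurons
alpha, alpha_D = 0.2, 0.05   # example and factor load (illustrative values)
P, D = int(alpha * N), int(alpha_D * N)

# Factors: D random N-dimensional vectors (Gaussian here, an assumed choice)
F = rng.standard_normal((D, N))

# Each example mixes the factors with random coefficients, then applies a
# non-linearity (sign is one possible choice, assumed for this sketch)
C = rng.standard_normal((P, D))
X = np.sign(C @ F / np.sqrt(D))   # P examples of N binary entries

# Hebb rule: couplings are the empirical correlation of the examples
J = (X.T @ X) / N
np.fill_diagonal(J, 0.0)          # no self-couplings
```

With such couplings one would then run zero-temperature dynamics (asynchronous sign updates) from a perturbed factor and check whether the overlap with that factor grows, which is the attractor property behind the learning transition.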