In the neural network literature, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced only once to form the synaptic matrix). However, the term "learning" in machine learning refers to the ability of the machine to extract features from the supplied dataset (e.g., one made of blurred examples of these archetypes) in order to build its own representation of the unavailable archetypes. Here, given a sample of examples, we define a supervised learning protocol by which the Hopfield network can infer the archetypes, and we identify the correct control parameters (including the size and quality of the dataset) to draw a phase diagram for the system's performance. We also prove that, for structureless datasets, the Hopfield model equipped with this supervised learning rule is equivalent to a restricted Boltzmann machine, which suggests an optimal and interpretable training routine. Finally, this approach is generalized to structured datasets: we highlight a quasi-ultrametric organization (reminiscent of replica-symmetry breaking) in the analyzed datasets and, consequently, we introduce an additional "replica hidden layer" for its (partial) disentanglement, which is shown to improve MNIST classification from 75% to 95% and to offer a new perspective on deep architectures.
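A minimal sketch of the kind of supervised protocol described above, under assumptions not spelled out in the abstract: each archetype is seen only through `M` blurred examples of quality `r`, the network averages the examples belonging to the same label before applying the standard Hebbian outer-product prescription, and retrieval is tested with one step of zero-temperature dynamics. All variable names (`xi`, `chi`, `eta`, `J`) are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # neurons
K = 5     # archetypes (unknown to the network)
M = 50    # supplied examples per archetype (dataset size)
r = 0.8   # dataset quality: each example bit matches its archetype with prob (1 + r)/2

# Archetypes and their blurred examples
xi = rng.choice([-1, 1], size=(K, N))
chi = rng.choice([-1, 1], size=(K, M, N), p=[(1 - r) / 2, (1 + r) / 2])
eta = xi[:, None, :] * chi          # noisy copies of the archetypes

# Supervised Hebbian rule (sketch): average the examples sharing a label,
# then store the averages with the usual Hebb outer product.
eta_bar = eta.mean(axis=1)          # (K, N) empirical estimate of the archetypes
J = eta_bar.T @ eta_bar / N         # synaptic matrix
np.fill_diagonal(J, 0.0)

# One step of zero-temperature dynamics from a corrupted archetype:
# the network should move toward the archetype it never saw directly.
sigma = xi[0] * rng.choice([-1, 1], size=N, p=[0.1, 0.9])
sigma_new = np.sign(J @ sigma)
overlap = (sigma_new * xi[0]).mean()
```

With a sufficiently large or clean dataset (here `M = 50`, `r = 0.8`) the averaged examples concentrate on the archetypes, so the final overlap is close to 1; shrinking `M` or `r` degrades retrieval, which is the trade-off the phase diagram in terms of dataset size and quality makes precise.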