Deep belief networks (DBNs) are stochastic neural networks that can extract rich internal representations of the environment from sensory data. DBNs had a catalytic effect in triggering the deep learning revolution, demonstrating for the first time the feasibility of unsupervised learning in networks with many layers of hidden neurons. Thanks to their biological and cognitive plausibility, these hierarchical architectures have also been successfully exploited to build computational models of human perception and cognition in a variety of domains. However, learning in DBNs is usually carried out in a greedy, layer-wise fashion, which does not allow the holistic development of cortical circuits to be simulated. Here we present iDBN, an iterative learning algorithm for DBNs that jointly updates the connection weights across all layers of the hierarchy. We test our algorithm on two different sets of visual stimuli, and we show that network development can also be tracked in terms of graph-theoretical properties. DBNs trained using our iterative approach achieve a final performance comparable to that of their greedy counterparts, while at the same time allowing the gradual development of internal representations in the generative model to be accurately analyzed. Our work paves the way to the use of iDBN for modeling neurocognitive development.
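To make the contrast with greedy layer-wise training concrete, the sketch below shows one way the iterative scheme could be organized: a stack of RBMs in which every layer receives a contrastive-divergence (CD-1) weight update on each training iteration, rather than each layer being trained to convergence before the next one starts. This is a minimal illustration under stated assumptions; the layer sizes, learning rate, batch size, and CD-1 rule are generic choices, not the exact recipe of the paper.

```python
# Minimal sketch of iterative (joint) DBN training.
# Assumption: the joint update amounts to one CD-1 step per RBM layer on every
# iteration, with the data propagated upward through the current hierarchy.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Bernoulli sampling of binary units from their activation probabilities
    return (rng.random(p.shape) < p).astype(float)

class RBM:
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def cd1_update(self, v0):
        # one contrastive-divergence (CD-1) step on a mini-batch v0
        p_h0 = sigmoid(v0 @ self.W + self.b_hid)
        h0 = sample(p_h0)
        p_v1 = sigmoid(h0 @ self.W.T + self.b_vis)      # reconstruction
        p_h1 = sigmoid(p_v1 @ self.W + self.b_hid)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_vis += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_hid += self.lr * (p_h0 - p_h1).mean(axis=0)
        return p_h0  # hidden activations become the input of the next layer

def train_iterative_dbn(data, layer_sizes, n_iters=200, batch_size=64):
    rbms = [RBM(layer_sizes[i], layer_sizes[i + 1])
            for i in range(len(layer_sizes) - 1)]
    for _ in range(n_iters):
        batch = data[rng.choice(len(data), batch_size, replace=False)]
        x = batch
        # iterative scheme: all layers are updated on every iteration,
        # instead of training one layer to convergence before the next
        for rbm in rbms:
            x = rbm.cd1_update(x)
    return rbms

# toy usage: random binary "images" with 784 pixels, a 784-500-200 hierarchy
toy_data = (rng.random((1000, 784)) < 0.3).astype(float)
dbn = train_iterative_dbn(toy_data, [784, 500, 200])
```

Because the whole stack of weight matrices evolves together under this scheme, intermediate snapshots of the weights can be inspected (e.g., with graph-theoretical measures) to track how the internal representations develop over training.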