Deep belief networks (DBNs) are stochastic neural networks that can extract rich internal representations of the environment from sensory data. DBNs had a catalytic effect in triggering the deep learning revolution, demonstrating for the first time the feasibility of unsupervised learning in networks with many layers of hidden neurons. These hierarchical architectures incorporate plausible biological and cognitive properties, making them particularly appealing as computational models of human perception and cognition. However, learning in DBNs is usually carried out in a greedy, layer-wise fashion, which does not allow the holistic maturation of cortical circuits to be simulated and thus prevents the modeling of cognitive development. Here we present iDBN, an iterative learning algorithm for DBNs that jointly updates the connection weights across all layers of the model. We evaluate the proposed iterative algorithm on two different sets of visual stimuli, measuring the generative capabilities of the learned model and its potential to support supervised downstream tasks. We also track network development in terms of graph-theoretical properties and investigate the potential extension of iDBN to continual learning scenarios. DBNs trained with our iterative approach achieve a final performance comparable to that of their greedy counterparts, while allowing accurate analysis of the gradual development of internal representations in the deep network and of the progressive improvement in task performance. Our work paves the way for the use of iDBN in modeling neurocognitive development.
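To make the contrast with greedy layer-wise training concrete, the following is a minimal sketch of an iDBN-style update loop, assuming a stack of binary RBMs trained with one step of Contrastive Divergence (CD-1). It is not the authors' implementation; the helper names (`cd1_step`, `iterative_dbn_step`) and hyperparameters are hypothetical. The key point it illustrates is that every layer receives a weight update on every mini-batch, so all layers mature jointly rather than one at a time.

```python
# Minimal illustrative sketch (not the paper's code): iterative joint updates
# of all RBM layers in a DBN, as opposed to greedy layer-wise pre-training.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Bernoulli sampling of binary units
    return (rng.random(p.shape) < p).astype(float)

def cd1_step(v, W, b_v, b_h, lr=0.01):
    # One CD-1 update for a single RBM; returns hidden probabilities,
    # which serve as input to the next layer.
    ph = sigmoid(v @ W + b_h)          # positive phase
    h = sample(ph)
    pv = sigmoid(h @ W.T + b_v)        # reconstruction
    ph_neg = sigmoid(pv @ W + b_h)     # negative phase
    W += lr * (v.T @ ph - pv.T @ ph_neg) / v.shape[0]
    b_v += lr * (v - pv).mean(axis=0)
    b_h += lr * (ph - ph_neg).mean(axis=0)
    return ph

def iterative_dbn_step(batch, layers, lr=0.01):
    # iDBN-style step: propagate the batch upward and update *every*
    # layer's weights on this mini-batch, so the whole hierarchy
    # develops together.
    x = batch
    for (W, b_v, b_h) in layers:
        x = cd1_step(x, W, b_v, b_h, lr)

# Toy usage: a two-hidden-layer DBN on random binary "images"
sizes = [784, 500, 200]
layers = [(rng.normal(0, 0.01, (n_in, n_out)), np.zeros(n_in), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]
data = (rng.random((64, 784)) < 0.5).astype(float)
for epoch in range(5):
    iterative_dbn_step(data, layers, lr=0.05)
```

Because all layers are updated at every iteration, intermediate snapshots of the network can be inspected at any point during training, which is what makes it possible to track the gradual development of internal representations described in the abstract.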