Unsupervised neural networks such as Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs) are powerful tools for automatic feature extraction, unsupervised weight initialization, and density estimation. In this paper, we demonstrate that the parameters of these networks can be reduced dramatically without affecting their performance. We describe a method for reducing the parameters required by an RBM, the basic building block of deep architectures. We further propose an unsupervised sparse deep architecture selection algorithm to form sparse deep neural networks. Experimental results show that there is virtually no loss in either generative or discriminative performance.
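For readers unfamiliar with the building block the abstract refers to, the following is a minimal NumPy sketch of an RBM trained with one step of contrastive divergence (CD-1). The layer sizes, learning rate, and batch are illustrative assumptions, not the paper's actual settings or its parameter-reduction method.

```python
# Minimal RBM sketch with a single CD-1 update step.
# All hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))  # weight matrix
b = np.zeros(n_visible)  # visible biases
c = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 parameter update from a batch of visible vectors v0."""
    global W, b, c
    # Positive phase: hidden probabilities given the data.
    h0 = sigmoid(v0 @ W + c)
    # Sample binary hidden states, then reconstruct the visible layer.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + b)
    h1 = sigmoid(v1 @ W + c)
    # Gradient approximation: data statistics minus reconstruction statistics.
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return v1

batch = rng.integers(0, 2, size=(4, n_visible)).astype(float)
recon = cd1_update(batch)
print(recon.shape)  # reconstruction has the same shape as the input batch
```

Note that the full weight matrix here has `n_visible * n_hidden` entries; the parameter-reduction idea described in the paper targets exactly this dense connectivity.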