Deep neural networks have unlocked a vast range of new applications by solving tasks, many of which were previously deemed reserved for higher human intelligence. One of the developments enabling this success was a boost in computing power provided by special-purpose hardware. Further significant improvements in energy efficiency and speed require fully parallel and analog hardware, yet analog neuron noise and its propagation, i.e. accumulation, threatens to render such approaches impractical. Here, we analyse for the first time the propagation of noise in parallel deep neural networks comprising noisy nonlinear neurons. We develop an analytical treatment of both symmetric networks, to highlight the underlying mechanisms, and networks trained with backpropagation. We find that noise accumulation is generally bounded, and that adding further network layers does not worsen the signal-to-noise ratio beyond this limit. Most importantly, noise accumulation can be suppressed entirely when neuron activation functions have a slope smaller than unity. We thereby develop a framework for noise in deep neural networks implemented in analog systems, and identify criteria allowing engineers to design novel, noise-resilient neural network hardware.
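The bounded-accumulation claim can be made concrete with a small numerical experiment. The sketch below is our own toy model, not the paper's analytical treatment: a symmetric network of noisy tanh neurons, with Gaussian noise of standard deviation NOISE_STD injected at every layer, where the function name accumulated_noise and all parameters are illustrative assumptions. It estimates the trial-to-trial output fluctuation as a function of depth for activation slopes below, at, and above unity.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_STD = 0.01  # std of the additive noise injected at every neuron (assumed)

def accumulated_noise(n_layers, slope, n_neurons=100, n_trials=500):
    """Trial-to-trial std of the output of a symmetric noisy tanh network
    (toy model), normalised by the per-layer injected noise."""
    # Fixed random weights, normalised so the per-layer linear gain is
    # governed by `slope` alone.
    W = rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)
    x0 = rng.standard_normal(n_neurons)
    outputs = np.empty((n_trials, n_neurons))
    for t in range(n_trials):
        x = x0
        for _ in range(n_layers):
            # tanh activation with slope `slope` at the origin, plus
            # additive Gaussian neuron noise in every layer.
            x = np.tanh(slope * (W @ x)) + NOISE_STD * rng.standard_normal(n_neurons)
        outputs[t] = x
    return outputs.std(axis=0).mean() / NOISE_STD

for slope in (0.5, 1.0, 2.0):
    ratios = [accumulated_noise(L, slope) for L in (1, 5, 20, 50)]
    print(f"slope={slope}: " + " ".join(
        f"depth {L}: {r:.1f}x" for L, r in zip((1, 5, 20, 50), ratios)))
```

In the linearised picture behind this sketch, the accumulated output variance is a geometric series, sigma^2 * sum_k s^(2k), which converges to sigma^2 / (1 - s^2) for an effective slope s < 1; for s well below unity the output noise stays close to the single-layer value at any depth, consistent with the suppression of accumulation stated above.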