Deep neural networks often present uncertainties such as hardware- and software-derived noise and randomness. We studied the effects of such uncertainty on learning outcomes, focusing on the role of graphics processing units (GPUs), and found that GPU-induced uncertainty increased the learning accuracy of a particular deep neural network. When a predictive deep neural network was trained on the CPU alone, without the GPU, its learning error was higher than when it was trained on the GPU for the same number of epochs, suggesting that the GPU plays a role in the learning process beyond merely increasing computational speed. Because this effect was not observed when training a simple autoencoder, it may be a phenomenon specific to certain types of neural networks. GPU-specific computational processing is less deterministic than that of CPUs, and hardware-derived uncertainties, which are often regarded as obstacles to be eliminated, might in some cases be successfully incorporated into the training of deep neural networks. Moreover, such uncertainties may be interesting phenomena to consider in relation to computational processing in the brain, which operates on a large mass of uncertain signals.
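The CPU-versus-GPU comparison described above can be illustrated with a minimal sketch, assuming a PyTorch-style setup (the paper does not specify its framework); the toy model, synthetic data, and training loop below are placeholders, not the authors' actual network. The same network is trained with identical seeds on the CPU and, if available, on the GPU, so any residual difference in final loss reflects device-specific nondeterminism rather than different initial conditions.

```python
import torch
import torch.nn as nn

def train_once(device: str, epochs: int = 50) -> float:
    torch.manual_seed(0)                       # identical initial weights and data on both runs
    x = torch.randn(256, 16)
    y = x.sum(dim=1, keepdim=True)             # toy regression target
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
    x, y = x.to(device), y.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

cpu_loss = train_once("cpu")
if torch.cuda.is_available():
    gpu_loss = train_once("cuda")
    print(f"final loss  CPU: {cpu_loss:.6f}  GPU: {gpu_loss:.6f}")
else:
    print(f"final loss  CPU: {cpu_loss:.6f} (no GPU available for comparison)")
```

As a follow-up check, PyTorch's `torch.use_deterministic_algorithms(True)` can be enabled to see how much of the observed gap disappears once nondeterministic GPU kernels are disallowed.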