We propose a new method to probe the learning mechanism of Deep Neural Networks (DNNs) by perturbing the system using Noise Injection Nodes (NINs). These nodes inject uncorrelated noise into existing feed-forward network architectures through additional optimizable weights, without changing the optimization algorithm. We find that the system displays distinct phases during training, dictated by the scale of the injected noise. We first derive expressions for the dynamics of the network and use a simple linear model as a test case. We find that in some cases the evolution of the noise nodes is similar to that of the unperturbed loss, indicating the possibility of using NINs to learn more about the full system in the future.
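As a rough illustration of the setup described above, the sketch below adds a noise injection node to a standard linear layer in PyTorch: an extra input carrying fresh, uncorrelated Gaussian noise is connected to the layer's outputs through its own trainable weight vector, while the ordinary weights and the optimizer are left unchanged. The class name `NoisyLinear`, the layer sizes, and the `noise_scale` value are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer augmented with a Noise Injection Node (NIN):
    an extra input carrying uncorrelated Gaussian noise, connected to the
    pre-activation through its own trainable weight vector."""

    def __init__(self, in_features, out_features, noise_scale=1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Trainable weights connecting the noise node to each output unit.
        self.noise_weight = nn.Parameter(torch.zeros(out_features))
        self.noise_scale = noise_scale  # scale of the injected noise (assumed hyperparameter)

    def forward(self, x):
        out = self.linear(x)
        if self.training:
            # Fresh noise drawn every forward pass, uncorrelated with the data.
            eps = self.noise_scale * torch.randn(x.shape[0], 1, device=x.device)
            out = out + eps * self.noise_weight
        return out

# The unchanged, standard optimizer updates the NIN weights alongside the ordinary ones.
model = nn.Sequential(
    NoisyLinear(784, 128, noise_scale=0.1),
    nn.ReLU(),
    NoisyLinear(128, 10, noise_scale=0.1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
```

In this sketch, tracking `noise_weight` over training would play the role of monitoring the noise nodes' evolution described in the abstract.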