We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small-noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima, biases towards models with more stable dynamics, and, in classification tasks, favors models with a larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results demonstrating that the resulting RNNs have improved robustness with respect to various input perturbations.
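The following is a minimal sketch of the kind of noise-injected RNN update the framework considers, read as an Euler-Maruyama discretization of an SDE on the hidden state. The function name, the tanh drift, the Gaussian noise, and all parameter values are hypothetical illustrations, not the paper's exact model.

```python
import numpy as np

def noisy_rnn_forward(x_seq, W_h, W_x, b, h0, dt=0.1, sigma=0.1, rng=None):
    """Sketch of a noise-injected RNN step, viewed as an Euler-Maruyama
    discretization of an SDE  dh = f(h, x) dt + sigma dW  driven by the input.

    x_seq : (T, d_in) input sequence driving the dynamics
    sigma : noise level; sigma -> 0 recovers the deterministic RNN
    """
    rng = np.random.default_rng() if rng is None else rng
    h = h0
    hidden_states = []
    for x_t in x_seq:
        drift = np.tanh(W_h @ h + W_x @ x_t + b)                      # f(h, x): deterministic update direction (assumed tanh cell)
        noise = sigma * np.sqrt(dt) * rng.standard_normal(h.shape)    # sqrt(dt)-scaled Gaussian increment injected into the hidden state
        h = h + dt * drift + noise                                    # Euler-Maruyama step
        hidden_states.append(h)
    return np.stack(hidden_states)

# Toy usage with hypothetical dimensions.
d_h, d_in, T = 8, 3, 20
rng = np.random.default_rng(0)
states = noisy_rnn_forward(
    x_seq=rng.standard_normal((T, d_in)),
    W_h=0.1 * rng.standard_normal((d_h, d_h)),
    W_x=0.1 * rng.standard_normal((d_h, d_in)),
    b=np.zeros(d_h),
    h0=np.zeros(d_h),
)
```

In this reading, training with the noise term corresponds to the injection schemes studied above; setting sigma to zero recovers the underlying deterministic recurrence.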