This work presents a theoretical study of stochastic neural networks, a widely used class of neural networks. Specifically, we prove that as the width of an optimized stochastic neural network tends to infinity, its predictive variance on the training set decreases to zero. Two common examples to which our theory applies are neural networks with dropout and variational autoencoders. Our result helps explain how stochasticity affects the learning of neural networks and thereby informs the design of better architectures for practical problems.
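The following is a minimal empirical sketch, not part of the paper itself, of how the claimed trend could be checked: train dropout networks of increasing width on a fixed training set, then estimate the Monte Carlo predictive variance on that set by sampling forward passes with dropout active. All hyperparameters (widths, dropout rate, learning rate, number of epochs and samples) are illustrative assumptions, and finite-epoch training only approximates the "optimized" network the theorem refers to; under the result above, the printed variance should shrink as the width grows.

```
# Sketch: Monte Carlo predictive variance of dropout networks vs. width.
# All settings below are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 4)                       # small fixed training set
y = torch.sin(X.sum(dim=1, keepdim=True))    # arbitrary smooth target

def train_dropout_net(width, epochs=2000, p=0.1):
    """Train a one-hidden-layer dropout network of the given width."""
    net = nn.Sequential(
        nn.Linear(4, width), nn.ReLU(), nn.Dropout(p),
        nn.Linear(width, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()    # MSE with dropout active
        loss.backward()
        opt.step()
    return net

for width in [32, 128, 512, 2048]:
    net = train_dropout_net(width)
    net.train()                              # keep dropout on for MC sampling
    with torch.no_grad():
        samples = torch.stack([net(X) for _ in range(200)])
    var = samples.var(dim=0).mean().item()   # avg. predictive variance on X
    print(f"width={width:5d}  mean predictive variance={var:.6f}")
```

The same probe applies, with the obvious changes, to a variational autoencoder by sampling its stochastic latent code instead of dropout masks.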