We state concentration inequalities for the output of the hidden layers of a stochastic deep neural network (SDNN), as well as for the output of the whole SDNN. These results allow us to introduce an expected classifier (EC) and to give a probabilistic upper bound for the classification error of the EC. We also determine the optimal number of layers for the SDNN via an optimal stopping procedure. We apply our analysis to a stochastic version of a feedforward neural network with ReLU activation function.
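To make the setting concrete, the following is a minimal illustrative sketch, not the paper's construction: a feedforward ReLU network whose hidden layers receive additive Gaussian noise (one common way to define an SDNN), together with a Monte Carlo "expected classifier" that averages the stochastic outputs and classifies by argmax. All names and parameters (noise_std, n_samples, the toy dimensions) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, weights, biases, noise_std=0.1):
    """One stochastic forward pass: ReLU hidden layers with additive
    Gaussian noise on each pre-activation (illustrative SDNN)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        noise = noise_std * rng.standard_normal(b.shape)  # assumed noise model
        h = np.maximum(0.0, W @ h + b + noise)            # noisy ReLU layer
    W, b = weights[-1], biases[-1]
    return W @ h + b  # deterministic linear output layer (logits)

def expected_classifier(x, weights, biases, n_samples=100, noise_std=0.1):
    """Monte Carlo estimate of the expected SDNN output; classify by argmax."""
    outs = [stochastic_forward(x, weights, biases, noise_std)
            for _ in range(n_samples)]
    return int(np.argmax(np.mean(outs, axis=0)))

# Toy usage: a random 2-hidden-layer network on a 4-dimensional input.
dims = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(dims[:-1], dims[1:])]
biases = [np.zeros(m) for m in dims[1:]]
print(expected_classifier(rng.standard_normal(4), weights, biases))
```

Under this kind of noise model, the concentration inequalities referred to above would control how far a single stochastic forward pass can deviate from its expectation, which is what makes the averaged (expected) classifier and its error bound meaningful.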