A law of large numbers for the empirical distribution of parameters of a one-layer artificial neural network with sparse connectivity is derived as the number of neurons and the number of stochastic gradient descent training iterations increase simultaneously.
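The setting can be illustrated with a minimal sketch: a one-layer network with mean-field 1/N scaling, a fixed random sparse connectivity mask, and online SGD on a squared loss, after which the parameters (c_i, w_i) define the empirical measure studied by the law of large numbers. All sizes, the sparsity level, the activation, and the synthetic target below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 200, 5   # neurons and input dimension (illustrative sizes)
p = 0.3         # each connection is active independently with probability p
lr = 0.1        # SGD step size

# Parameters and a fixed sparse connectivity pattern
c = rng.normal(size=N)
W = rng.normal(size=(N, d))
mask = rng.random((N, d)) < p

def forward(x):
    # One-layer network with mean-field 1/N scaling and masked weights
    return c @ np.tanh((W * mask) @ x) / N

def sgd_step(x, y):
    # One online SGD step on the squared loss 0.5 * (f(x) - y)^2
    global c, W
    h = np.tanh((W * mask) @ x)
    err = c @ h / N - y
    grad_c = err * h / N
    # Gradient w.r.t. W, restricted to active connections by the mask
    grad_W = (err * c * (1 - h**2) / N)[:, None] * x[None, :] * mask
    c -= lr * grad_c
    W -= lr * grad_W

# Train online on a simple synthetic target
for _ in range(1000):
    x = rng.normal(size=d)
    sgd_step(x, np.sin(x[0]))

# The trained parameters induce the empirical measure (1/N) sum_i delta_{(c_i, w_i)},
# whose large-N, many-iteration limit is the object of the law of large numbers.
emp_mean_c = c.mean()
```

As N grows (with iterations scaled accordingly), summary statistics of this empirical measure, such as `emp_mean_c`, concentrate around deterministic limits.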