This article gives a new proof that fully connected neural networks with random weights and biases converge to Gaussian processes in the regime where the input dimension, output dimension, and depth are kept fixed, while the hidden layer widths tend to infinity. Unlike prior work, convergence is shown assuming only moment conditions for the distribution of weights and for quite general non-linearities.
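As a rough illustration of the convergence the abstract describes (not part of the article itself), the sketch below samples a fully connected network with i.i.d. non-Gaussian (Rademacher) weights and biases at a fixed input and checks that, as the hidden widths grow, the output distribution looks increasingly Gaussian. The architecture, widths, tanh nonlinearity, and the kurtosis check are all illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def random_mlp_output(x, widths, rng, nonlin=np.tanh):
    """One forward pass through a fully connected network with i.i.d. random
    weights and biases. Rademacher (+/-1) entries are used here to illustrate
    that only moment conditions, not Gaussianity, drive the limit."""
    h = x
    num_layers = len(widths) - 1
    for ell in range(num_layers):
        n_in, n_out = widths[ell], widths[ell + 1]
        W = rng.choice([-1.0, 1.0], size=(n_out, n_in)) / np.sqrt(n_in)  # variance 1/fan-in
        b = rng.choice([-1.0, 1.0], size=n_out)
        h = W @ h + b
        if ell < num_layers - 1:          # nonlinearity on hidden layers only
            h = nonlin(h)
    return h

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5, 0.3])            # fixed input; input dimension 3
for width in (8, 64, 512):
    samples = np.array([
        random_mlp_output(x, [3, width, width, 1], rng)[0]
        for _ in range(5000)
    ])
    # Crude normality check: excess kurtosis of the scalar output should
    # approach 0 as the hidden width grows.
    m, v = samples.mean(), samples.var()
    kurt = ((samples - m) ** 4).mean() / v**2 - 3.0
    print(f"width={width:4d}  mean={m:+.3f}  var={v:.3f}  excess kurtosis={kurt:+.3f}")
```

This only probes the one-dimensional marginal at a single input; the theorem concerns the joint law over finitely many inputs, so the sketch should be read as a sanity check rather than a verification.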