Recent years have witnessed growing interest in the correspondence between infinitely wide networks and Gaussian processes. Despite the effectiveness and elegance of the current neural network Gaussian process theory, to the best of our knowledge, all existing neural network Gaussian processes are induced by increasing width. However, in the era of deep learning, what matters more about a neural network is its depth and how depth shapes a network's behavior. Inspired by a width-depth symmetry consideration, we use a shortcut network to show that increasing the depth of a neural network can also give rise to a Gaussian process, which is a valuable addition to the existing theory and helps reveal the true picture of deep learning. Beyond the proposed Gaussian process by depth, we theoretically characterize its uniform tightness property and the smallest eigenvalue of the Gaussian process kernel. These characterizations not only enhance our understanding of the proposed depth-induced Gaussian process but also pave the way for future applications. Lastly, we examine the performance of the proposed Gaussian process through regression experiments on two benchmark data sets.
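To make the depth-induced limit concrete, the following is a minimal, illustrative sketch (not the paper's exact construction): it draws many independent randomly initialized shortcut networks and inspects the empirical joint distribution of their outputs on two fixed inputs as depth grows. The width, ReLU activation, and the 1/sqrt(depth) scaling of the residual branch are assumptions made purely for demonstration.

```python
# Illustrative sketch: empirically probing whether the outputs of random
# shortcut (residual) networks behave like a Gaussian process as depth grows.
# Architecture details below are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

def shortcut_net(X, depth, width=64):
    """One random shortcut network applied to n inputs.

    X: (d, n) matrix of n input points; returns n scalar outputs,
    all computed with the SAME random weights so their joint
    distribution across draws can be examined.
    """
    d, n = X.shape
    W_in = rng.normal(0.0, 1.0 / np.sqrt(d), size=(width, d))
    H = W_in @ X
    for _ in range(depth):
        W1 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
        W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
        # Residual branch scaled by 1/sqrt(depth) (an assumed scaling)
        # so the output variance stays bounded as depth increases.
        H = H + (W2 @ np.maximum(W1 @ H, 0.0)) / np.sqrt(depth)
    w_out = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)
    return w_out @ H  # shape (n,)

# Two fixed 5-dimensional inputs; for each depth, estimate the empirical
# output covariance over many independent random networks. If a GP limit
# holds, this covariance plays the role of the kernel evaluated at (x1, x2).
X = rng.normal(size=(5, 2))
for depth in (1, 8, 64):
    outs = np.array([shortcut_net(X, depth) for _ in range(2000)])  # (2000, 2)
    print(f"depth={depth}, empirical output covariance:\n{np.cov(outs.T)}\n")
```

Under this assumed parameterization, the printed 2x2 covariance stabilizes as depth grows, which is the kind of kernel convergence the depth-induced Gaussian process formalizes; the paper's theory additionally characterizes uniform tightness and the smallest eigenvalue of this kernel.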