Recent years have witnessed increasing interest in the correspondence between infinitely wide neural networks and Gaussian processes. Despite the effectiveness and elegance of the existing neural network Gaussian process theory, to the best of our knowledge, all such Gaussian processes are essentially induced by increasing width. However, in the era of deep learning, what concerns us more about a neural network is its depth and how depth impacts its behavior. Motivated by a width-depth symmetry consideration, we use a shortcut network to show that increasing the depth of a neural network can also give rise to a Gaussian process, which is a valuable addition to the existing theory and helps reveal a fuller picture of deep learning. Beyond establishing the depth-induced Gaussian process, we theoretically characterize its uniform tightness and the smallest eigenvalue of its associated kernel. These characterizations not only deepen our understanding of the depth-induced Gaussian process but also pave the way for future applications. Lastly, we evaluate the proposed Gaussian process in regression experiments on two real-world data sets.
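To make the regression setting concrete, the following is a minimal sketch of standard Gaussian process regression in Python. The closed form of the depth-induced kernel is not given in this abstract, so `rbf_kernel` below is a hypothetical placeholder; in practice it would be replaced by the kernel derived from the shortcut network.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Placeholder stationary kernel. The paper's depth-induced kernel
    # would be substituted here; its form is not specified in this abstract.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_regression(X_train, y_train, X_test, kernel, noise=1e-2):
    # Standard GP posterior mean and variance via a Cholesky solve
    # (Rasmussen & Williams, Ch. 2), independent of the kernel choice.
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = kernel(X_train, X_test)
    K_ss = kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy usage on synthetic data (not the paper's real-world data sets).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
X_test = np.linspace(-3.0, 3.0, 100)[:, None]
mu, var = gp_regression(X, y, X_test, rbf_kernel)
```

The smallest eigenvalue of the kernel matrix `K`, which the abstract says the paper characterizes theoretically for the depth-induced kernel, governs how well-conditioned this Cholesky solve is.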