Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g. the rate of decay of the generalisation error with the number of training samples. In this paper, we study deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the rate of decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the teacher function depends on the full set of input variables, then the error rate is inversely proportional to the input dimension. We conclude by computing the rate when a deep CNN is trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that despite their hierarchical structure, the functions generated by deep CNNs are too rich to be efficiently learnable in high dimension.