Understanding how convolutional neural networks (CNNs) can efficiently learn high-dimensional functions remains a fundamental challenge. A popular belief is that these models harness the local and hierarchical structure of natural data such as images. Yet, we lack a quantitative understanding of how such structure affects performance, e.g. the rate of decay of the generalisation error with the number of training samples. In this paper, we study deep CNNs in the kernel regime. First, we show that the spectrum of the corresponding kernel inherits the hierarchical structure of the network, and we characterise its asymptotics. Then, we use this result together with generalisation bounds to prove that deep CNNs adapt to the spatial scale of the target function. In particular, we find that if the target function depends on low-dimensional subsets of adjacent input variables, then the rate of decay of the error is controlled by the effective dimensionality of these subsets. Conversely, if the teacher function depends on the full set of input variables, then the error rate is inversely proportional to the input dimension. We conclude by computing the rate when a deep CNN is trained on the output of another deep CNN with randomly-initialised parameters. Interestingly, we find that, despite their hierarchical structure, the functions generated by deep CNNs are too rich to be efficiently learnable in high dimension.
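The abstract's central claim — that the error rate is controlled by the effective dimensionality of the subset of inputs the target depends on — can be illustrated with a toy kernel ridge regression experiment. This is a minimal sketch, not the paper's construction: the Laplace kernel, the `sin`-of-a-sum target, and all function names below are illustrative choices, not taken from the paper.

```python
import numpy as np

def laplace_kernel(X, Y, c=1.0):
    # Gram matrix of the Laplace kernel exp(-c * ||x - y||).
    d = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    return np.exp(-c * d)

def krr_test_error(d_eff, d, n_train, n_test=500, ridge=1e-6, seed=0):
    """Test MSE of kernel ridge regression when the target depends
    only on the first d_eff of d input coordinates (illustrative)."""
    rng = np.random.default_rng(seed)
    Xtr = rng.standard_normal((n_train, d))
    Xte = rng.standard_normal((n_test, d))
    # Target function with effective dimensionality d_eff.
    f = lambda X: np.sin(X[:, :d_eff].sum(axis=1))
    ytr, yte = f(Xtr), f(Xte)
    # Solve (K + ridge * I) alpha = y, then predict on the test set.
    K = laplace_kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + ridge * np.eye(n_train), ytr)
    pred = laplace_kernel(Xte, Xtr) @ alpha
    return np.mean((pred - yte) ** 2)

# A target depending on 2 of 8 coordinates is learnt far better at
# the same sample size than one depending on all 8, and the error
# for the low-dimensional target shrinks as n_train grows.
err_low  = krr_test_error(d_eff=2, d=8, n_train=400)
err_full = krr_test_error(d_eff=8, d=8, n_train=400)
err_small_n = krr_test_error(d_eff=2, d=8, n_train=50)
```

The gap between `err_low` and `err_full` mirrors, in a crude non-asymptotic way, the rate separation the paper derives: kernel methods adapt to the low-dimensional structure of the target, while a target depending on the full input suffers the curse of dimensionality.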