The realization of complex classification tasks requires the training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input, and the following layers reveal progressively larger-scale patterns, until the network reliably characterizes a class of inputs. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. Extrapolation of this power law indicates that the generalized LeNet can achieve the small error rates previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture; however, VGG-16 requires a significantly larger number of operations than LeNet to achieve a given error rate. This power-law phenomenon governs various generalized LeNet and VGG-16 architectures, hinting at its universal behavior and suggesting a quantitative hierarchical time-space complexity among machine learning architectures. Additionally, a conservation law along the convolutional layers, namely that the square root of a layer's size times its depth is constant, is found to asymptotically minimize error rates. The efficient shallow learning demonstrated in this study calls for further quantitative examination using various databases and architectures, and for its accelerated implementation using future dedicated hardware developments.
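To make the scaling setup concrete, the sketch below illustrates one plausible reading of the abstract: a five-layer generalized LeNet for CIFAR-10 whose first convolutional depth d1 is varied while the second layer keeps a fixed depth ratio to it, together with a log-log linear fit of the reported power law, error(d1) ≈ c · d1^(-ρ). This is a minimal illustration, not the authors' released code; the ratio value, kernel sizes, and the placeholder error measurements are assumptions for demonstration only.

```python
import numpy as np
import torch
import torch.nn as nn


class GeneralizedLeNet(nn.Module):
    """Five-layer LeNet-style network for 3x32x32 inputs (CIFAR-10).

    d1 is the number of filters in the first convolutional layer; the
    second layer's depth is tied to it by a fixed ratio, as in the
    scaling described above (the ratio of 2 here is an assumption).
    """

    def __init__(self, d1: int, ratio: int = 2, num_classes: int = 10):
        super().__init__()
        d2 = ratio * d1  # fixed depth ratio between the two conv layers
        self.features = nn.Sequential(
            nn.Conv2d(3, d1, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(d1, d2, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d2 * 5 * 5, 120), nn.ReLU(),  # 32 -> 28 -> 14 -> 10 -> 5
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


def fit_power_law(d1_values, error_rates):
    """Fit error(d1) ~ c * d1**(-rho) by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(d1_values), np.log(error_rates), 1)
    return -slope, np.exp(intercept)  # (exponent rho, prefactor c)


if __name__ == "__main__":
    # Hypothetical test errors for increasing d1 (placeholders, not paper data).
    d1_values = np.array([6, 12, 24, 48])
    error_rates = np.array([0.35, 0.28, 0.22, 0.18])
    rho, c = fit_power_law(d1_values, error_rates)
    print(f"fitted power-law exponent rho = {rho:.3f}")
    # Extrapolating the fitted law to a wider first layer, as done in the
    # abstract to estimate achievable error rates:
    print(f"extrapolated error at d1=512: {c * 512 ** (-rho):.3f}")
```

Training such a family of models at several values of d1 and fitting the resulting errors on a log-log scale is the natural way to estimate the exponent ρ and to compare the time-space trade-off between the LeNet and VGG-16 families.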