Compared with the intensive practical research on deep convolutional neural networks (DCNNs), the study of their theoretical behavior lags far behind. In particular, the universal consistency of DCNNs remains open. In this paper, we prove that implementing empirical risk minimization on DCNNs with expansive convolution (with zero-padding) is strongly universally consistent. Motivated by this universal consistency result, we conduct a series of experiments showing that, without any fully connected layers, DCNNs with expansive convolution perform no worse than the widely used deep neural networks with a hybrid structure containing contracting (without zero-padding) convolutional layers and several fully connected layers.
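As a rough illustration (not drawn from the paper itself), the following NumPy sketch contrasts the two convolution types named above: expansive (zero-padded) one-dimensional convolution enlarges the representation by s-1 coordinates per layer, while contracting (un-padded) convolution shrinks it by the same amount. The input dimension, filter length, depth, and random weights here are arbitrary assumptions chosen only to show the dimension bookkeeping of a purely convolutional network with no fully connected layers.

```python
import numpy as np

def expansive_conv(x, w):
    # Zero-padded ("full") 1D convolution: output length len(x) + len(w) - 1,
    # so the representation expands layer by layer.
    return np.convolve(x, w, mode="full")

def contracting_conv(x, w):
    # Un-padded ("valid") 1D convolution: output length len(x) - len(w) + 1,
    # so the representation shrinks layer by layer.
    return np.convolve(x, w, mode="valid")

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                           # input of dimension d = 16 (assumed)
filters = [rng.standard_normal(3) for _ in range(4)]  # 4 layers, filter length s = 3 (assumed)

h = x
for w in filters:                                     # purely convolutional, no fully connected layers
    h = np.maximum(expansive_conv(h, w), 0.0)         # ReLU after each convolution
print(len(h))                                         # 16 + 4*(3-1) = 24

g = x
for w in filters:
    g = np.maximum(contracting_conv(g, w), 0.0)
print(len(g))                                         # 16 - 4*(3-1) = 8
```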