Universal approximation, the question of whether a set of functions can approximate an arbitrary function in a specific function space, has been actively studied in recent years owing to the rapid development of neural networks. Despite their widespread use, however, research on the universal properties of convolutional neural networks has been limited by their complex structure. In this work, we prove the universal approximation theorem for convolutional neural networks. A convolution with padding outputs data of the same shape as its input; it is therefore natural to ask whether a convolutional neural network composed of such convolutions can approximate shape-preserving functions. We show that convolutional neural networks can approximate continuous functions whose input and output values have the same shape. In addition, we present the minimum depth required for this approximation and prove that it is optimal. We also show that convolutional neural networks with sufficiently deep layers retain universality when the number of channels is limited.
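The shape-preserving property of padded convolution mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's construction; the helper name `conv2d_same` and the choice of zero ("same") padding are assumptions for illustration.

```python
import numpy as np

def conv2d_same(x, kernel):
    """2-D convolution with zero ('same') padding.

    The input is padded so that the output has exactly the same
    spatial shape as x -- the property the abstract refers to.
    Assumes odd kernel dimensions.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad the borders
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # correlate the kernel with the padded window at (i, j)
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

x = np.random.rand(8, 8)
k = np.ones((3, 3)) / 9.0  # simple averaging kernel
y = conv2d_same(x, k)
print(y.shape)  # (8, 8) -- same shape as the input
```

Because each padded convolution maps an input to an output of identical shape, a network stacking such layers is itself a shape-preserving map, which is the class of functions whose approximation the abstract addresses.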