The Convolutional Neural Network (CNN) is one of the most prominent neural network architectures in deep learning. Despite its widespread adoption, our understanding of its universal approximation property (UAP) has remained limited due to its intricate structure. CNNs inherently function as tensor-to-tensor mappings that preserve the spatial structure of input data, yet little research has examined whether fully convolutional networks can approximate arbitrary continuous tensor-to-tensor functions. In this study, we show that CNNs with zero padding can approximate arbitrary continuous functions whenever the input and output share the same spatial shape. We further determine the minimum depth required for such approximation and prove its optimality. We also verify that deep, narrow CNNs possess the UAP as tensor-to-tensor functions. These results cover a wide range of activation functions and CNNs of all dimensions.
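To make the setting concrete, the following minimal sketch (written in PyTorch; the channel widths, depth, and kernel size are illustrative choices, not taken from the paper) shows a fully convolutional network whose zero padding makes it a tensor-to-tensor map preserving the input's spatial shape:

```python
import torch
import torch.nn as nn

class FCN(nn.Module):
    """A fully convolutional network: a tensor-to-tensor map.

    With 3x3 kernels and padding=1 (zero padding, PyTorch's default
    padding_mode='zeros'), every layer preserves the spatial shape,
    so the output has the same height and width as the input.
    """

    def __init__(self, in_channels=3, hidden=16, out_channels=3, depth=4):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(depth - 1):
            layers += [nn.Conv2d(c, hidden, kernel_size=3, padding=1), nn.ReLU()]
            c = hidden
        layers.append(nn.Conv2d(c, out_channels, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

x = torch.randn(1, 3, 32, 32)        # input tensor with spatial shape 32x32
y = FCN()(x)
assert y.shape[-2:] == x.shape[-2:]  # spatial shape is preserved
```

The abstract's claims concern this class of maps: networks of this form, with zero padding and sufficient depth, can approximate arbitrary continuous functions between tensors of the same spatial shape.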