Deep learning has been widely successful in practice, and most state-of-the-art machine learning methods are based on neural networks. Lacking, however, is a rigorous mathematical theory that adequately explains the remarkable performance of deep neural networks. In this article, we present a relatively new mathematical framework that provides the beginning of a deeper understanding of deep learning. This framework precisely characterizes the functional properties of neural networks that are trained to fit data. The key mathematical tools that support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory, all techniques deeply rooted in signal processing. This framework explains the effect of weight decay regularization in neural network training, the use of skip connections and low-rank weight matrices in network architectures, the role of sparsity in neural networks, and why neural networks can perform well in high-dimensional problems.
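As a point of reference for the weight decay regularization mentioned above, a minimal sketch of the corresponding training objective is shown below. The notation here is ours, not taken from the article: $f_{\theta}$ denotes a neural network with parameters $\theta$, $\mathcal{L}$ a pointwise loss, $\lambda > 0$ the regularization strength, and $W_{\ell}$ the weight matrix of layer $\ell$.
\[
\min_{\theta} \;\; \sum_{i=1}^{n} \mathcal{L}\bigl(f_{\theta}(x_i),\, y_i\bigr) \;+\; \lambda \sum_{\ell} \lVert W_{\ell} \rVert_F^{2}
\]
The second term penalizes the squared Frobenius norms of the layer weights; the framework discussed in the article characterizes which functions are favored by training objectives of this form.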