Understanding deep learning has become increasingly urgent as it penetrates further into industry and science. In recent years, a line of research based on Fourier analysis has shed light on this magical "black box" by showing a Frequency Principle (F-Principle, or spectral bias) in the training behavior of deep neural networks (DNNs): DNNs often fit target functions from low to high frequency during training. The F-Principle was first demonstrated on one-dimensional synthetic data and subsequently verified on high-dimensional real datasets. A series of follow-up works further corroborated its validity. This low-frequency implicit bias reveals the strength of neural networks in learning low-frequency functions as well as their deficiency in learning high-frequency functions. Such understanding has inspired the design of DNN-based algorithms for practical problems, explained experimental phenomena emerging in various scenarios, and advanced the study of deep learning from the frequency perspective. Although necessarily incomplete, we provide an overview of the F-Principle and propose some open problems for future research.
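To make the one-dimensional demonstration concrete, the sketch below is a minimal, hypothetical setup (not the exact experiment from the original papers): it trains a small tanh network on a sum of three sinusoids and tracks the relative Fourier-domain error at each target frequency. Under the F-Principle, the low-frequency error should decay first. The architecture, target function, and training schedule here are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1-D synthetic target mixing a low, a medium, and a high frequency.
x = torch.linspace(-np.pi, np.pi, 257)[:-1].unsqueeze(1)  # periodic grid, 256 points
y = torch.sin(x) + torch.sin(3 * x) + torch.sin(5 * x)

# Small fully-connected network (hypothetical architecture for illustration).
model = nn.Sequential(nn.Linear(1, 200), nn.Tanh(), nn.Linear(200, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

target_fft = np.fft.rfft(y.squeeze().numpy())
freqs = [1, 3, 5]  # DFT bins matching the three sinusoids in the target

for step in range(1, 5001):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if step % 1000 == 0:
        out_fft = np.fft.rfft(model(x).detach().squeeze().numpy())
        rel_err = [abs(out_fft[k] - target_fft[k]) / abs(target_fft[k]) for k in freqs]
        print(step, [f"{e:.3f}" for e in rel_err])
# Per the F-Principle, the relative error at frequency 1 is expected to
# drop first, then frequency 3, with frequency 5 converging last.
```

Monitoring the error per frequency bin, rather than only the aggregate loss, is what makes the low-to-high-frequency ordering of convergence visible.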