We propose reproducing activation functions (RAFs) to improve deep learning accuracy in applications ranging from computer vision to scientific computing. The idea is to employ several basic functions and their learnable linear combination to construct a data-driven activation function for each neuron. Armed with RAFs, neural networks (NNs) can reproduce traditional approximation tools and, therefore, approximate target functions with fewer parameters than traditional NNs. During NN training, RAFs can generate neural tangent kernels (NTKs) with a better condition number than traditional activation functions, which lessens the spectral bias of deep learning. As demonstrated by extensive numerical tests, the proposed RAFs can facilitate the convergence of deep learning optimization toward solutions of higher accuracy than existing deep learning solvers for audio/image/video reconstruction, PDEs, and eigenvalue problems. With RAFs, the errors of audio/video reconstruction, PDEs, and eigenvalue problems are decreased by over 14%, 73%, and 99%, respectively, compared with the baselines, while image-reconstruction performance increases by 58%.
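The core construction, a learnable linear combination of basic functions applied per neuron, can be sketched as follows. This is a minimal NumPy illustration only: the choice of basis (identity, sine, ReLU) and the layer shapes are assumptions for the sketch, not the paper's specific configuration, and the coefficients `alphas` would be trained jointly with the weights by the usual gradient-based optimizer.

```python
import numpy as np

# Illustrative basis functions (an assumption for this sketch; the
# actual set of basic functions used in the paper may differ).
BASIS = [
    lambda z: z,                    # identity
    np.sin,                         # sinusoid
    lambda z: np.maximum(z, 0.0),   # ReLU
]

def raf_layer(x, W, b, alphas):
    """Forward pass of a dense layer with neuron-wise RAFs.

    x      : (batch, d_in) inputs
    W, b   : (d_in, d_out) and (d_out,) affine parameters
    alphas : (len(BASIS), d_out) learnable combination coefficients,
             one coefficient vector per output neuron
    """
    pre = x @ W + b                              # (batch, d_out)
    # Neuron j applies sum_k alphas[k, j] * basis_k(pre[:, j]),
    # i.e. each neuron has its own data-driven activation.
    stacked = np.stack([g(pre) for g in BASIS])  # (K, batch, d_out)
    return np.einsum('kbd,kd->bd', stacked, alphas)
```

Because the combination is linear in `alphas`, gradients with respect to these coefficients are straightforward, so they can be optimized alongside `W` and `b` with no change to the training loop.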