Neural network-based function approximation plays a pivotal role in scientific computing and machine learning. Yet training such models faces several challenges: (i) each target function often requires training a new model from scratch; (ii) performance is highly sensitive to architectural and hyperparameter choices; and (iii) models frequently generalize poorly beyond the training domain. To overcome these challenges, we propose a reusable initialization framework based on basis-function pretraining. In this approach, basis neural networks are first trained to approximate families of polynomials on a reference domain; their learned parameters are then used to initialize networks for more complex target functions. To enhance adaptability across arbitrary domains, we further introduce a domain-mapping mechanism that transforms inputs into the reference domain, thereby preserving structural correspondence with the pretrained models. Extensive numerical experiments in one- and two-dimensional settings demonstrate substantial improvements in training efficiency, generalization, and model transferability, highlighting the promise of initialization-based strategies for scalable and modular neural function approximation. The full code is publicly available on Gitee.
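The overall workflow can be illustrated with a minimal sketch, assuming a PyTorch-style implementation; the names (BasisNet, map_to_reference, pretrain_basis) and the monomial targets are illustrative stand-ins for the paper's actual basis networks, polynomial families, and domain mapping, not the released code.

```python
# Minimal sketch: pretrain a small network on a polynomial basis function over the
# reference domain [-1, 1], then reuse its parameters to initialize a network for a
# more complex target function on an arbitrary interval [a, b].
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """Small MLP intended to approximate one basis function on the reference domain."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def map_to_reference(x, a, b):
    """Affine domain mapping from an arbitrary interval [a, b] onto [-1, 1]."""
    return 2.0 * (x - a) / (b - a) - 1.0

def pretrain_basis(degree, steps=2000, lr=1e-3):
    """Pretrain a BasisNet to fit the monomial x**degree on the reference domain."""
    model = BasisNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.rand(256, 1) * 2.0 - 1.0          # samples in [-1, 1]
        loss = ((model(x) - x**degree) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Reuse: copy the pretrained parameters as the initialization of a network that will
# be fine-tuned on a more complex target function over [a, b].
pretrained = pretrain_basis(degree=3)
target_net = BasisNet()
target_net.load_state_dict(pretrained.state_dict())

a, b = 0.0, 5.0                                      # arbitrary target domain
x = torch.linspace(a, b, 200).unsqueeze(1)
y_hat = target_net(map_to_reference(x, a, b))        # inputs mapped to [-1, 1] first
```

In this sketch the domain mapping keeps the fine-tuned network structurally aligned with the pretrained one, since both always see inputs on the same reference domain.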