Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) across scientific and engineering domains. However, traditional PINN architectures typically rely on fully connected multilayer perceptrons (MLPs), which lack the sparsity and modularity inherent in many classical numerical solvers. This study investigates a novel approach that merges established PINN methodologies with brain-inspired neural network techniques to address this architectural limitation. We apply Brain-Inspired Modular Training (BIMT), which draws on concepts such as locality, sparsity, and modularity from the organization of the brain. Through BIMT, we demonstrate the evolution of PINN architectures from fully connected structures to highly sparse and modular forms, reducing computational complexity and memory requirements. We showcase the efficacy of this approach by solving differential equations with varying spectral components, revealing insights into the spectral bias phenomenon and its impact on neural network architecture. Moreover, by applying BIMT training to simple problems, we derive basic PINN building blocks, analogous to convolutional and attention modules in deep neural networks, that enable the construction of modular PINN architectures. Our experiments show that these modular architectures achieve improved accuracy compared to traditional fully connected MLP PINNs, demonstrating their potential for enhancing PINN performance while reducing computational overhead. Overall, this study advances the understanding and development of efficient and effective neural network architectures for solving PDEs, bridging the gap between PINNs and traditional numerical methods.
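To make the BIMT idea referenced above concrete, the sketch below illustrates its central mechanism: an L1 penalty on each weight scaled by the "wire length" between the neurons it connects, so that training favors sparse, local connections. This is a minimal, illustrative implementation; the function name `bimt_penalty`, the linear neuron layout, and the coefficient `lam` are assumptions for this example, and full BIMT additionally permutes neurons during training, which is omitted here.

```python
import numpy as np

def bimt_penalty(weights, layer_spacing=1.0, lam=1e-3):
    """Locality-weighted L1 penalty in the spirit of BIMT.

    Each weight is charged |w| times the distance between the two
    neurons it connects, encouraging sparse and spatially local wiring.
    `weights` is a list of (n_out, n_in) matrices for an MLP.
    Illustrative sketch only: full BIMT also swaps neuron positions
    during training, which is not shown here.
    """
    total = 0.0
    for W in weights:
        n_out, n_in = W.shape
        # Place each layer's neurons evenly on a unit line,
        # with consecutive layers separated by `layer_spacing`.
        x_in = np.linspace(0.0, 1.0, n_in)
        x_out = np.linspace(0.0, 1.0, n_out)
        # Euclidean "wire length" between every connected neuron pair
        # (assumed geometry for this sketch).
        dist = np.sqrt((x_out[:, None] - x_in[None, :]) ** 2
                       + layer_spacing ** 2)
        total += np.sum(np.abs(W) * dist)
    return lam * total
```

Adding this term to the usual PINN loss (PDE residual plus boundary terms) is what drives the architecture toward the sparse, modular forms discussed in the abstract: long-range connections cost more than short ones, so surviving weights cluster into local modules.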