This paper introduces a Pseudo-Symplectic Neural Network (PSNN) for learning general Hamiltonian systems, both separable and non-separable, from data. To address the limitations of existing structure-preserving methods (e.g., symplectic integrators that are explicit only for separable systems, while general systems demand implicit schemes or costly explicit approximations), PSNN adopts an explicit pseudo-symplectic integrator as its dynamical core, achieving nearly exact symplecticity with only a high-order structural error. In addition, the authors propose learnable Padé-type activation functions grounded in Padé approximation theory, which empirically outperform classical ReLU, Taylor-based activations, and PAU. Combining these innovations, PSNN learns and forecasts diverse Hamiltonian systems (e.g., a 2D modified pendulum and 4D galactic dynamics) more accurately than state-of-the-art models, with better long-term stability and energy preservation, while requiring shorter training time, fewer samples, and fewer parameters. The framework thus bridges the gap between computational efficiency and geometric structure preservation in Hamiltonian system modeling.
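To make the structural claim concrete, the standard notion of pseudo-symplecticity (assumed here to match the paper's usage) asks that a one-step map Φ_h be accurate to order p while its Jacobian preserves the canonical structure matrix J to a strictly higher order q > p:

```latex
% Pseudo-symplecticity of order (p, q) with q > p; \varphi_h denotes the
% exact Hamiltonian flow (standard definition, assumed to match the paper).
\Phi_h(y) = \varphi_h(y) + \mathcal{O}(h^{p+1}),
\qquad
\left(\frac{\partial \Phi_h}{\partial y}\right)^{\!\top} J \,
\frac{\partial \Phi_h}{\partial y} = J + \mathcal{O}(h^{q+1}),
\qquad
J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}.
```

The method thus stays explicit, yet its symplecticity defect decays faster than its truncation error. For the second innovation, the sketch below shows a minimal learnable Padé-type (rational) activation in the spirit of PAU; the class name, polynomial degrees, initialization, and the pole-avoiding denominator are illustrative assumptions, not the paper's exact parameterization:

```python
import torch
import torch.nn as nn

class PadeActivation(nn.Module):
    """Learnable rational activation y = P(x) / Q(x) (illustrative sketch).

    P has degree m and Q degree n; all coefficients train with the network.
    Using Q(x) = 1 + |q_1 x + ... + q_n x^n| >= 1 keeps Q pole-free.
    """

    def __init__(self, m: int = 5, n: int = 4):
        super().__init__()
        # Small random initial coefficients (assumption; one could instead
        # initialize from a Pade fit of a reference activation such as tanh).
        self.p = nn.Parameter(0.1 * torch.randn(m + 1))  # p_0 ... p_m
        self.q = nn.Parameter(0.1 * torch.randn(n))      # q_1 ... q_n

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Numerator P(x) = p_0 + p_1 x + ... + p_m x^m via Horner's rule.
        num = torch.zeros_like(x)
        for c in torch.flip(self.p, dims=[0]):
            num = num * x + c
        # Denominator polynomial q_1 x + ... + q_n x^n (no constant term).
        den = torch.zeros_like(x)
        for c in torch.flip(self.q, dims=[0]):
            den = (den + c) * x
        return num / (1.0 + den.abs())

# Usage: a drop-in replacement for a fixed nonlinearity.
act = PadeActivation()
y = act(torch.linspace(-3.0, 3.0, 7))
```

The 1 + |·| form is one common way to guarantee a strictly positive denominator, so training cannot drive the rational function into a pole.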