Accurate temporal extrapolation remains a fundamental challenge for neural operators modeling dynamical systems, where predictions must extend far beyond the training horizon. Conventional DeepONet approaches rely on two limited paradigms: fixed-horizon rollouts, which predict full spatiotemporal solutions while ignoring temporal causality, and autoregressive schemes, which accumulate errors through sequential prediction. We introduce TI-DeepONet, a framework that integrates neural operators with adaptive numerical time-stepping to preserve the Markovian structure of dynamical systems while mitigating long-term error growth. Our method shifts the learning objective from direct state prediction to approximating instantaneous time-derivative fields, which are then integrated using standard numerical solvers. This naturally enables continuous-time prediction and allows the use of higher-order integrators at inference than those used in training, improving both efficiency and accuracy. We further propose TI(L)-DeepONet, which incorporates learnable coefficients for intermediate slopes in multi-stage integration, adapting to solution-specific dynamics and enhancing fidelity. Across four canonical PDEs featuring chaotic, dissipative, dispersive, and high-dimensional behavior, TI(L)-DeepONet slightly outperforms TI-DeepONet, and both achieve major reductions in relative L2 extrapolation error: about 81% compared to autoregressive methods and 70% compared to fixed-horizon approaches. Notably, both models maintain stable predictions over temporal domains nearly twice the training interval. This work establishes a physics-aware operator learning framework that bridges neural approximation with numerical analysis principles, addressing a key gap in long-term forecasting of complex physical systems.
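The core idea described above — learning the instantaneous time-derivative field and advancing it with a classical multi-stage integrator, optionally with learnable slope weights — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `deriv_net` is a hypothetical stand-in for a trained DeepONet (chosen so the true dynamics `du/dt = -u` are known), and the function names are not from the paper.

```python
import numpy as np

def deriv_net(u, t):
    """Hypothetical stand-in for a neural operator that predicts du/dt.

    A real TI-DeepONet would evaluate the trained branch/trunk networks
    here; we use known linear dynamics du/dt = -u for illustration.
    """
    return -u

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step over the learned field."""
    k1 = f(u, t)
    k2 = f(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(u + dt * k3, t + dt)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rk4_step_learnable(f, u, t, dt, w=(1/6, 1/3, 1/3, 1/6)):
    """RK4-style step with weighted intermediate slopes.

    In the spirit of TI(L)-DeepONet, the slope coefficients `w` would be
    trainable parameters adapted to the dynamics; the defaults recover
    the classical RK4 weights.
    """
    k1 = f(u, t)
    k2 = f(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(u + dt * k3, t + dt)
    return u + dt * (w[0] * k1 + w[1] * k2 + w[2] * k3 + w[3] * k4)

def rollout(f, u0, t0, dt, n_steps):
    """Integrate the learned derivative field forward in time."""
    u, t = np.asarray(u0, dtype=float), t0
    for _ in range(n_steps):
        u = rk4_step(f, u, t, dt)
        t += dt
    return u

# Integrate du/dt = -u from u(0) = 1 to t = 1; exact answer is exp(-1).
u_final = rollout(deriv_net, u0=np.array([1.0]), t0=0.0, dt=0.01, n_steps=100)
```

Because the network only learns the derivative field, the integrator is a free choice at inference: the same `deriv_net` can be stepped with a higher-order or adaptive scheme than was used in training, which is the efficiency/accuracy trade-off the abstract refers to.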